Jan 03 04:16:11 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 03 04:16:11 crc restorecon[4703]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 03 04:16:11 crc restorecon[4703]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc 
restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc 
restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 
04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc 
restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc 
restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 
crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 
04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc 
restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc 
restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 03 04:16:11 crc restorecon[4703]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc 
restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:11 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc 
restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc 
restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc 
restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 04:16:12 crc restorecon[4703]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 03 04:16:12 crc kubenswrapper[4865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 03 04:16:12 crc kubenswrapper[4865]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 03 04:16:12 crc kubenswrapper[4865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 03 04:16:12 crc kubenswrapper[4865]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 03 04:16:12 crc kubenswrapper[4865]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 03 04:16:12 crc kubenswrapper[4865]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.991127 4865 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994345 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994365 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994369 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994373 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994381 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994387 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994407 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994415 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994420 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994425 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994429 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994433 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994437 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994440 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994444 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994448 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994451 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994455 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994458 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994462 4865 feature_gate.go:330] unrecognized 
feature gate: AdditionalRoutingCapabilities Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994466 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994470 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994475 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994479 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994483 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994487 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994491 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994495 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994503 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994508 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994512 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994516 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994519 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994523 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 03 04:16:12 crc 
kubenswrapper[4865]: W0103 04:16:12.994527 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994531 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994534 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994538 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994541 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994546 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994551 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994555 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994559 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994562 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994566 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994569 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994576 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994582 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994586 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994591 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994595 4865 feature_gate.go:330] unrecognized feature gate: Example Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994600 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994604 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994609 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994613 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994616 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994620 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994623 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994627 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994631 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994634 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994638 4865 feature_gate.go:330] 
unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994641 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994644 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994648 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994652 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994659 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994663 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994667 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994671 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.994676 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994776 4865 flags.go:64] FLAG: --address="0.0.0.0" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994785 4865 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994793 4865 flags.go:64] FLAG: --anonymous-auth="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994799 4865 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994805 4865 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994811 4865 
flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994817 4865 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994823 4865 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994827 4865 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994832 4865 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994837 4865 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994841 4865 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994846 4865 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994851 4865 flags.go:64] FLAG: --cgroup-root="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994854 4865 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994859 4865 flags.go:64] FLAG: --client-ca-file="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994862 4865 flags.go:64] FLAG: --cloud-config="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994866 4865 flags.go:64] FLAG: --cloud-provider="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994870 4865 flags.go:64] FLAG: --cluster-dns="[]" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994875 4865 flags.go:64] FLAG: --cluster-domain="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994879 4865 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994884 4865 flags.go:64] FLAG: --config-dir="" Jan 03 04:16:12 crc kubenswrapper[4865]: 
I0103 04:16:12.994887 4865 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994892 4865 flags.go:64] FLAG: --container-log-max-files="5" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994898 4865 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994901 4865 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994906 4865 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994910 4865 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994914 4865 flags.go:64] FLAG: --contention-profiling="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994918 4865 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994922 4865 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994926 4865 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994931 4865 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994936 4865 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994939 4865 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994944 4865 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994948 4865 flags.go:64] FLAG: --enable-load-reader="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994953 4865 flags.go:64] FLAG: --enable-server="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994957 4865 flags.go:64] FLAG: 
--enforce-node-allocatable="[pods]" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994962 4865 flags.go:64] FLAG: --event-burst="100" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994967 4865 flags.go:64] FLAG: --event-qps="50" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994971 4865 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994975 4865 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994979 4865 flags.go:64] FLAG: --eviction-hard="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994985 4865 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994989 4865 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994993 4865 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.994997 4865 flags.go:64] FLAG: --eviction-soft="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995001 4865 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995006 4865 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995010 4865 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995014 4865 flags.go:64] FLAG: --experimental-mounter-path="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995019 4865 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995023 4865 flags.go:64] FLAG: --fail-swap-on="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995027 4865 flags.go:64] FLAG: --feature-gates="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995032 4865 flags.go:64] FLAG: 
--file-check-frequency="20s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995036 4865 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995040 4865 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995044 4865 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995048 4865 flags.go:64] FLAG: --healthz-port="10248" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995052 4865 flags.go:64] FLAG: --help="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995056 4865 flags.go:64] FLAG: --hostname-override="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995060 4865 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995065 4865 flags.go:64] FLAG: --http-check-frequency="20s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995071 4865 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995076 4865 flags.go:64] FLAG: --image-credential-provider-config="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995081 4865 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995086 4865 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995091 4865 flags.go:64] FLAG: --image-service-endpoint="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995095 4865 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995099 4865 flags.go:64] FLAG: --kube-api-burst="100" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995104 4865 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995111 
4865 flags.go:64] FLAG: --kube-api-qps="50" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995117 4865 flags.go:64] FLAG: --kube-reserved="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995121 4865 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995125 4865 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995129 4865 flags.go:64] FLAG: --kubelet-cgroups="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995133 4865 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995137 4865 flags.go:64] FLAG: --lock-file="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995142 4865 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995145 4865 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995149 4865 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995155 4865 flags.go:64] FLAG: --log-json-split-stream="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995160 4865 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995164 4865 flags.go:64] FLAG: --log-text-split-stream="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995168 4865 flags.go:64] FLAG: --logging-format="text" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995172 4865 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995176 4865 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995181 4865 flags.go:64] FLAG: --manifest-url="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995185 4865 
flags.go:64] FLAG: --manifest-url-header="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995191 4865 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995195 4865 flags.go:64] FLAG: --max-open-files="1000000" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995200 4865 flags.go:64] FLAG: --max-pods="110" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995204 4865 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995208 4865 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995212 4865 flags.go:64] FLAG: --memory-manager-policy="None" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995217 4865 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995221 4865 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995225 4865 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995230 4865 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995239 4865 flags.go:64] FLAG: --node-status-max-images="50" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995243 4865 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995247 4865 flags.go:64] FLAG: --oom-score-adj="-999" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995251 4865 flags.go:64] FLAG: --pod-cidr="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995256 4865 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995264 4865 flags.go:64] FLAG: --pod-manifest-path="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995268 4865 flags.go:64] FLAG: --pod-max-pids="-1" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995272 4865 flags.go:64] FLAG: --pods-per-core="0" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995276 4865 flags.go:64] FLAG: --port="10250" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995281 4865 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995285 4865 flags.go:64] FLAG: --provider-id="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995289 4865 flags.go:64] FLAG: --qos-reserved="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995293 4865 flags.go:64] FLAG: --read-only-port="10255" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995298 4865 flags.go:64] FLAG: --register-node="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995302 4865 flags.go:64] FLAG: --register-schedulable="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995306 4865 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995317 4865 flags.go:64] FLAG: --registry-burst="10" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995321 4865 flags.go:64] FLAG: --registry-qps="5" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995325 4865 flags.go:64] FLAG: --reserved-cpus="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995329 4865 flags.go:64] FLAG: --reserved-memory="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995334 4865 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 
04:16:12.995338 4865 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995343 4865 flags.go:64] FLAG: --rotate-certificates="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995347 4865 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995351 4865 flags.go:64] FLAG: --runonce="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995355 4865 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995359 4865 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995364 4865 flags.go:64] FLAG: --seccomp-default="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995368 4865 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995373 4865 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995377 4865 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995384 4865 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995401 4865 flags.go:64] FLAG: --storage-driver-password="root" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995405 4865 flags.go:64] FLAG: --storage-driver-secure="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995410 4865 flags.go:64] FLAG: --storage-driver-table="stats" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995414 4865 flags.go:64] FLAG: --storage-driver-user="root" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995419 4865 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995423 4865 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 03 
04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995428 4865 flags.go:64] FLAG: --system-cgroups="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995432 4865 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995439 4865 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995443 4865 flags.go:64] FLAG: --tls-cert-file="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995448 4865 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995454 4865 flags.go:64] FLAG: --tls-min-version="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995458 4865 flags.go:64] FLAG: --tls-private-key-file="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995464 4865 flags.go:64] FLAG: --topology-manager-policy="none" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995468 4865 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995473 4865 flags.go:64] FLAG: --topology-manager-scope="container" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995478 4865 flags.go:64] FLAG: --v="2" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995483 4865 flags.go:64] FLAG: --version="false" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995489 4865 flags.go:64] FLAG: --vmodule="" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995495 4865 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 03 04:16:12 crc kubenswrapper[4865]: I0103 04:16:12.995499 4865 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995605 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995611 4865 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995615 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995619 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995624 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995627 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995631 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995635 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995639 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995643 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 03 04:16:12 crc kubenswrapper[4865]: W0103 04:16:12.995647 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995651 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995655 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995659 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995663 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995666 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 
04:16:12.995669 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995676 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995681 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995685 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995689 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995692 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995696 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995700 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995703 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995707 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995710 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995714 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995718 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995722 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995727 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995731 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995734 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995738 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995742 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995746 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995750 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995753 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995757 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995762 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995767 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995771 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995775 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995779 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995783 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995787 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995791 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995796 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995799 4865 feature_gate.go:330] unrecognized feature gate: Example
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995803 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995807 4865 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995811 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995814 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995818 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995821 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995825 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995828 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995833 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995836 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995840 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995844 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995847 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995851 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995854 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995859 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995863 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995867 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995870 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995874 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995877 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:12.995881 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:12.996034 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.006912 4865 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.006954 4865 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007099 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007123 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007132 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007143 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007153 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007163 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007172 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007180 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007189 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007198 4865 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007206 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007214 4865 feature_gate.go:330] unrecognized feature gate: Example
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007224 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007233 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007242 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007251 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007259 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007267 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007291 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007312 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007321 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007329 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007337 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007345 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007353 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007360 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007368 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007376 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007383 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007419 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007430 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007442 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007451 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007460 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007478 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007489 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007499 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007509 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007517 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007525 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007533 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007544 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007554 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007562 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007571 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007580 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007588 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007597 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007608 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007617 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007626 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007635 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007644 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007653 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007663 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007685 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007694 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007702 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007713 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007722 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007731 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007739 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007747 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007755 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007765 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007773 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007781 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007788 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007797 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007805 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.007813 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.007826 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008163 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008179 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008188 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008196 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008204 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008212 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008220 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008228 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008236 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008244 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008251 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008259 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008267 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008275 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008282 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008290 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008298 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008306 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008313 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008340 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008348 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008356 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008366 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008374 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008415 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008427 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008436 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008445 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008455 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008463 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008472 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008480 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008506 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008515 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008522 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008530 4865 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008537 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008545 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008552 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008560 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008571 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008581 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008589 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008598 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008606 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008614 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008622 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008629 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008637 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008644 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008652 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008660 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008668 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008676 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008684 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008705 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008713 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008721 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008729 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008737 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008744 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008753 4865 feature_gate.go:330] unrecognized feature gate: Example
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008760 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008768 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008776 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008784 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008792 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008800 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008808 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008816 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.008823 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.008835 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.009285 4865 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.013970 4865 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.014120 4865 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.015077 4865 server.go:997] "Starting client certificate rotation"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.015175 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.015818 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 19:31:45.73168545 +0000 UTC
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.015941 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.024763 4865 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.026946 4865 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.030887 4865 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.039842 4865 log.go:25] "Validated CRI v1 runtime API"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.055272 4865 log.go:25] "Validated CRI v1 image API"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.056644 4865 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.059720 4865 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-03-04-09-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.059753 4865 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.075465 4865 manager.go:217] Machine: {Timestamp:2026-01-03 04:16:13.073899266 +0000 UTC m=+0.190952451 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2b92fba2-4500-48df-a5c0-af75c72ccb04 BootID:e7152c83-ea61-42b3-a0b7-284415671ac6 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:da:d3:0e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:da:d3:0e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:44:3a:4c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8a:b1:39 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9d:43:b9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:86:35:1e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a6:a0:8a:98:67:7b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:4a:08:92:f2:4c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.075772 4865 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.075971 4865 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.076740 4865 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.076920 4865 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.076959 4865 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.077375 4865 topology_manager.go:138] "Creating topology manager with none policy"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.077414 4865 container_manager_linux.go:303] "Creating device plugin manager"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.077643 4865 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.077664 4865 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.077884 4865 state_mem.go:36] "Initialized new in-memory state store"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.078198 4865 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.079050 4865 kubelet.go:418] "Attempting to sync node with API server"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.079072 4865 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.079088 4865 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.079101 4865 kubelet.go:324] "Adding apiserver pod source"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.079116 4865 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.080834 4865 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.081322 4865 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.081415 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.081487 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError"
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.081508 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.081632 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.081989 4865 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.083956 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084001 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084016 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084051 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084303 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084364 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084442 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084490 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084523 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084550 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084584 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.084608 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.085258 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.086880 4865 server.go:1280] "Started kubelet"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.088035 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.088049 4865 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.088053 4865 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.089603 4865 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.090606 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.090674 4865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 03 04:16:13 crc systemd[1]: Started Kubernetes Kubelet.
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.091013 4865 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.091033 4865 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.090945 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:56:45.380978678 +0000 UTC
Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.092333 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused
Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.093434 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError"
Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.092572 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.092861 4865 server.go:460] "Adding debug handlers to kubelet server"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.091192 4865 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.091887 4865 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.092927 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18871d7663f939a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 04:16:13.086833064 +0000 UTC m=+0.203886279,LastTimestamp:2026-01-03 04:16:13.086833064 +0000 UTC m=+0.203886279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.104692 4865 factory.go:55] Registering systemd factory
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.105348 4865 factory.go:221] Registration of the systemd container factory successfully
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.106484 4865 factory.go:153] Registering CRI-O factory
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.106600 4865 factory.go:221] Registration of the crio container factory successfully
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.106722 4865 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.106841 4865 factory.go:103] Registering Raw factory
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.106948 4865 manager.go:1196] Started watching for new ooms in manager
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.107738 4865 manager.go:319] Starting recovery of all containers
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120301 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120701 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120768 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120792 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120826 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120849 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120879 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120900 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120939 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120961 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.120980 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121011 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121035 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121072 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121102 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121131 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121157 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121185 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121211 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121236 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121264 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121288 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121319 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121344 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121366 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121427 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121502 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121539 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121572 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121596 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121625 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121653 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121680 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121711 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121734 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121756 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121783 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121843 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121867 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121898 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121931 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121959 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.121980 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122005 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122103 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122202 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122286 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122331 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122376 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122476 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122529 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122580 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122643 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122732 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122785 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122846 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122913 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.122959 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.125939 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.125994 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126020 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126041 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126063 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126084 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126104 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126124 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126145 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126166 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126187 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126210 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126230 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126250 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126269 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126289 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config"
seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126308 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126329 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126351 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126373 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126422 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126444 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc 
kubenswrapper[4865]: I0103 04:16:13.126466 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126484 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126505 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126527 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126545 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126567 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126642 4865 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126665 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126685 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126704 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126724 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126753 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126779 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126807 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126831 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126864 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126891 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126916 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126946 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126973 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.126999 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127033 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127055 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127085 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127123 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" 
seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127145 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127167 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127190 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127213 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127233 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127253 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127274 4865 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127294 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127316 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127338 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127356 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.127385 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128314 4865 reconstruct.go:144] "Volume is marked device as uncertain and added into 
the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128360 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128482 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128509 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128561 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128583 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128605 4865 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128628 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128658 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128687 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128707 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128726 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128746 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128766 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128786 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128805 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128826 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128846 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128868 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128890 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128909 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128928 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128948 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128968 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.128989 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129008 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129029 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129049 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129068 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129090 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129110 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129131 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129151 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129171 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129191 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129209 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129231 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" 
Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129251 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129273 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129603 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129626 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129646 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129665 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129691 
4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129739 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129767 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129796 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129822 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129848 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129872 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129898 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129923 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129951 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129971 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.129991 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130010 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130033 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130051 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130076 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130102 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130129 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130152 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 03 
04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130184 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130258 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130280 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130300 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130321 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130342 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130364 4865 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130419 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130441 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130463 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130485 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130504 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130524 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130544 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130566 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130584 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130606 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130627 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130647 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130666 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130688 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130708 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130727 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130748 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130768 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130788 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130810 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130830 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130903 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130924 4865 reconstruct.go:97] "Volume reconstruction finished" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.130938 4865 reconciler.go:26] "Reconciler: start to sync state" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.135689 4865 manager.go:324] Recovery completed Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.146820 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.148579 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.148633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.148648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.149330 4865 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.149353 4865 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.149409 4865 state_mem.go:36] "Initialized new in-memory state store" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.151177 4865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.154266 4865 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.154338 4865 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.154433 4865 kubelet.go:2335] "Starting kubelet main sync loop" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.154537 4865 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.194142 4865 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.195924 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.196033 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.207651 4865 policy_none.go:49] "None policy: Start" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.208990 4865 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.209042 4865 state_mem.go:35] "Initializing new in-memory state store" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.254715 4865 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 
04:16:13.276368 4865 manager.go:334] "Starting Device Plugin manager" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.276444 4865 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.276458 4865 server.go:79] "Starting device plugin registration server" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.276887 4865 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.276919 4865 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.277124 4865 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.277328 4865 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.277364 4865 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.289103 4865 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.294187 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.377798 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.379476 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.379515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.379529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.379554 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.379969 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.455418 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.455749 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.457792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.457828 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.457863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.458045 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc 
kubenswrapper[4865]: I0103 04:16:13.458472 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.458553 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.459666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.459726 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.459753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.459842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.459883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.459902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.459972 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.460070 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.460121 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461863 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461906 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.461939 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.463873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.463913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.463931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.464047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.464099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.464128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.464133 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.464065 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.464168 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.465340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.465443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.465470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.466506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.466562 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.466587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.466883 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.466945 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.468194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.468228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.468239 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540222 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540297 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540345 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 
04:16:13.540370 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540639 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540665 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540689 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540709 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540730 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540751 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540776 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540802 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540820 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.540842 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.580510 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.581937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.582001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.582024 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.582065 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.582779 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642527 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642599 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642638 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642670 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642700 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642730 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642758 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642789 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642805 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642814 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642861 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 
04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642912 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642751 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642917 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642988 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642948 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642943 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643035 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.642961 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643037 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643110 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: 
I0103 04:16:13.643114 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643037 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643073 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643416 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.643560 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.695828 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.808335 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.831886 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.837428 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.841652 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-028c729d71e4933a0de65410d439763bed17410f53554c1052722317a1c73104 WatchSource:0}: Error finding container 028c729d71e4933a0de65410d439763bed17410f53554c1052722317a1c73104: Status 404 returned error can't find the container with id 028c729d71e4933a0de65410d439763bed17410f53554c1052722317a1c73104 Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.855909 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.861584 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.862003 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-057ec2e94007e9732092d67246cb2777dd69ff6e0082cd6ff62d64dc7cb150f7 WatchSource:0}: Error finding container 057ec2e94007e9732092d67246cb2777dd69ff6e0082cd6ff62d64dc7cb150f7: Status 404 returned error can't find the container with id 057ec2e94007e9732092d67246cb2777dd69ff6e0082cd6ff62d64dc7cb150f7 Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.864446 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5601712b5d44fb48f81a2a8e2482e46095e42feef1b52b6cdd35d6f24f1389dc WatchSource:0}: Error finding container 5601712b5d44fb48f81a2a8e2482e46095e42feef1b52b6cdd35d6f24f1389dc: Status 404 returned error can't find the container with id 
5601712b5d44fb48f81a2a8e2482e46095e42feef1b52b6cdd35d6f24f1389dc Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.888674 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-fd223282b301a937140d5b45d46aef41fbe3ae0eec49ac78e62c65dd10e242e5 WatchSource:0}: Error finding container fd223282b301a937140d5b45d46aef41fbe3ae0eec49ac78e62c65dd10e242e5: Status 404 returned error can't find the container with id fd223282b301a937140d5b45d46aef41fbe3ae0eec49ac78e62c65dd10e242e5 Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.900026 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c200b146aa14a8f3e155a7fbfc03a8a3abacbb93c512e93800862575fedcb501 WatchSource:0}: Error finding container c200b146aa14a8f3e155a7fbfc03a8a3abacbb93c512e93800862575fedcb501: Status 404 returned error can't find the container with id c200b146aa14a8f3e155a7fbfc03a8a3abacbb93c512e93800862575fedcb501 Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.932572 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.932745 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 03 04:16:13 crc kubenswrapper[4865]: W0103 04:16:13.941279 4865 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.941447 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.983728 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.985283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.985362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.985427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:13 crc kubenswrapper[4865]: I0103 04:16:13.985482 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 04:16:13 crc kubenswrapper[4865]: E0103 04:16:13.986164 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.089643 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.196:6443: connect: connection refused Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.093738 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:59:55.568356857 +0000 UTC Jan 03 04:16:14 crc kubenswrapper[4865]: W0103 04:16:14.108213 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:14 crc kubenswrapper[4865]: E0103 04:16:14.108333 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.159304 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5601712b5d44fb48f81a2a8e2482e46095e42feef1b52b6cdd35d6f24f1389dc"} Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.161561 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"057ec2e94007e9732092d67246cb2777dd69ff6e0082cd6ff62d64dc7cb150f7"} Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.162824 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"028c729d71e4933a0de65410d439763bed17410f53554c1052722317a1c73104"} Jan 03 04:16:14 crc 
kubenswrapper[4865]: I0103 04:16:14.165235 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c200b146aa14a8f3e155a7fbfc03a8a3abacbb93c512e93800862575fedcb501"} Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.167243 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd223282b301a937140d5b45d46aef41fbe3ae0eec49ac78e62c65dd10e242e5"} Jan 03 04:16:14 crc kubenswrapper[4865]: W0103 04:16:14.290831 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:14 crc kubenswrapper[4865]: E0103 04:16:14.291218 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 03 04:16:14 crc kubenswrapper[4865]: E0103 04:16:14.497245 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s" Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.787273 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.790264 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.790368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.790430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:14 crc kubenswrapper[4865]: I0103 04:16:14.790477 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 04:16:14 crc kubenswrapper[4865]: E0103 04:16:14.791231 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.089227 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.094556 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:56:00.698711675 +0000 UTC Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.172771 4865 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6" exitCode=0 Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.172917 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.172876 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6"} Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.174287 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.174339 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.174358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.176094 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a"} Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.176126 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642"} Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.178149 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d" exitCode=0 Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.178260 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.178275 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d"} Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.179590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.179639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.179651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.181469 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04" exitCode=0 Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.181525 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04"} Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.181578 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.181619 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.182623 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.182675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.182693 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.184009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.184046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.184058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.184799 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b53aaba1c41647fe3c95fe99cad6b8d1b1216fa7a0daa76e094d6210ac94f90e" exitCode=0 Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.184841 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b53aaba1c41647fe3c95fe99cad6b8d1b1216fa7a0daa76e094d6210ac94f90e"} Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.184928 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.185938 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.185978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.185990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:15 crc kubenswrapper[4865]: I0103 04:16:15.200603 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating 
certificates Jan 03 04:16:15 crc kubenswrapper[4865]: E0103 04:16:15.202131 4865 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 03 04:16:15 crc kubenswrapper[4865]: E0103 04:16:15.885714 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18871d7663f939a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 04:16:13.086833064 +0000 UTC m=+0.203886279,LastTimestamp:2026-01-03 04:16:13.086833064 +0000 UTC m=+0.203886279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 04:16:15 crc kubenswrapper[4865]: W0103 04:16:15.983580 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:15 crc kubenswrapper[4865]: E0103 04:16:15.983680 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 
38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.091540 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.094759 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:03:52.349656294 +0000 UTC Jan 03 04:16:16 crc kubenswrapper[4865]: E0103 04:16:16.098049 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.193730 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.193807 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.197331 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.197470 4865 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.198560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.198600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.198613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.199938 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d96dbb0a0839b9c8df0a06ae78ec24a0d462c5b36b2d4bc8d50c09f0bd1ffc2d" exitCode=0 Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.200030 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d96dbb0a0839b9c8df0a06ae78ec24a0d462c5b36b2d4bc8d50c09f0bd1ffc2d"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.200188 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.201288 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.201333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.201342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.203632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a47d2e954b06fc2d77a560cee467cfc8466811e10ac66e8f7eefba6dcbbeaa2e"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.203671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.208671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.208724 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.208893 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c"} Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.209346 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.209381 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.209464 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.392220 4865 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.394461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.394497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.394505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:16 crc kubenswrapper[4865]: I0103 04:16:16.394528 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.095504 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:40:07.83335112 +0000 UTC Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.221996 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="62606efcf011d5bfd09547bdc6007a6495ce432336309c7ff19bdc53f20516ec" exitCode=0 Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.222127 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"62606efcf011d5bfd09547bdc6007a6495ce432336309c7ff19bdc53f20516ec"} Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.222191 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.223742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.223808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.223826 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.227439 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7"} Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.227544 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.228817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.228874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.228897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.235354 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e"} Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.235424 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288"} Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.235449 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a"} Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.235500 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.235557 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.235963 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.237099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.237138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.237155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.237315 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.237358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.237375 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.238309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.238499 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.238529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:17 crc kubenswrapper[4865]: I0103 04:16:17.387060 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.096299 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:30:06.789387379 +0000 UTC Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.096435 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 109h13m48.69295779s for next certificate rotation Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.248674 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"692700db0881e846808bff753ba2556dcefd7ea95234d138ba915f8f3d6a0b93"} Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.248751 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.248776 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.248800 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3bd4ba13421fa8feb6a5678c6749b24f74cee09d6f569d326437d9bea501eef2"} Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.248710 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:18 crc 
kubenswrapper[4865]: I0103 04:16:18.248838 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.248896 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.248828 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"517af9b81bdcc9c930f7375f5da0995535e52a5ce71831c8458f8e28cc1463f0"} Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.250873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.250925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.250947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.250881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.251047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.251071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.251492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.251552 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:18 
crc kubenswrapper[4865]: I0103 04:16:18.251574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.310963 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:18 crc kubenswrapper[4865]: I0103 04:16:18.566444 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.258313 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d6db8970413843be3801dd9d0246559d75d987240a9ee950ec1e9c787a4b23ee"} Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.258365 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.258420 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"784edd351131ab05a02130d9a9c66ff265bdd2c93f99928d8e7f2e3e8691a823"} Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.258443 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.258514 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.258669 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:19 crc kubenswrapper[4865]: 
I0103 04:16:19.260338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.260495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.261939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.261988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.262006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.371772 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.379747 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.410914 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 03 04:16:19 crc kubenswrapper[4865]: I0103 04:16:19.838020 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.261748 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.261802 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.267479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.267539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.267567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.267847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.268106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.268128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:20 crc 
kubenswrapper[4865]: I0103 04:16:20.387652 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.387769 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.408378 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.408646 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.410674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.410765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:20 crc kubenswrapper[4865]: I0103 04:16:20.410792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.264021 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.264768 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:21 crc 
kubenswrapper[4865]: I0103 04:16:21.265152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.265234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.265260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.265481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.265549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.265568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.365482 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.365723 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.367310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.367354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.367370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:21 crc kubenswrapper[4865]: I0103 04:16:21.901030 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.266778 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.267626 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.267661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.267674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.679240 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.679896 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.681334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.681370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:22 crc kubenswrapper[4865]: I0103 04:16:22.681410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:23 crc kubenswrapper[4865]: E0103 04:16:23.289277 4865 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 03 04:16:26 crc kubenswrapper[4865]: E0103 04:16:26.396169 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 03 04:16:26 crc kubenswrapper[4865]: W0103 04:16:26.641079 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 03 04:16:26 crc kubenswrapper[4865]: I0103 04:16:26.641237 4865 trace.go:236] Trace[932642520]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 04:16:16.639) (total time: 10001ms): Jan 03 04:16:26 crc kubenswrapper[4865]: Trace[932642520]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (04:16:26.641) Jan 03 04:16:26 crc kubenswrapper[4865]: Trace[932642520]: [10.001356218s] [10.001356218s] END Jan 03 04:16:26 crc kubenswrapper[4865]: E0103 04:16:26.641276 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 03 04:16:26 crc kubenswrapper[4865]: I0103 04:16:26.710441 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 03 04:16:26 crc kubenswrapper[4865]: I0103 04:16:26.710520 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 03 04:16:26 crc kubenswrapper[4865]: I0103 04:16:26.717325 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 03 04:16:26 crc kubenswrapper[4865]: I0103 04:16:26.717437 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 03 04:16:28 crc kubenswrapper[4865]: I0103 04:16:28.316900 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:28 crc kubenswrapper[4865]: I0103 04:16:28.317104 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:28 crc kubenswrapper[4865]: I0103 04:16:28.318552 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:28 crc kubenswrapper[4865]: I0103 04:16:28.318585 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:28 crc kubenswrapper[4865]: I0103 04:16:28.318608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:28 crc kubenswrapper[4865]: I0103 04:16:28.567356 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 03 04:16:28 crc kubenswrapper[4865]: I0103 04:16:28.567476 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.596848 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.599265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.599314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.599331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.599363 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 04:16:29 crc kubenswrapper[4865]: E0103 04:16:29.604692 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.871855 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.872065 4865 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.873751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.873991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.874167 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:29 crc kubenswrapper[4865]: I0103 04:16:29.889470 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 03 04:16:30 crc kubenswrapper[4865]: I0103 04:16:30.291274 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:30 crc kubenswrapper[4865]: I0103 04:16:30.292544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:30 crc kubenswrapper[4865]: I0103 04:16:30.292615 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:30 crc kubenswrapper[4865]: I0103 04:16:30.292637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:30 crc kubenswrapper[4865]: I0103 04:16:30.388713 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 04:16:30 crc kubenswrapper[4865]: I0103 04:16:30.388832 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.373171 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.373461 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.374054 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.374110 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.375309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.375378 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.375422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:31 crc kubenswrapper[4865]: 
I0103 04:16:31.380519 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:31 crc kubenswrapper[4865]: E0103 04:16:31.697553 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.700503 4865 trace.go:236] Trace[1577573372]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 04:16:16.947) (total time: 14753ms): Jan 03 04:16:31 crc kubenswrapper[4865]: Trace[1577573372]: ---"Objects listed" error: 14753ms (04:16:31.700) Jan 03 04:16:31 crc kubenswrapper[4865]: Trace[1577573372]: [14.753125528s] [14.753125528s] END Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.700526 4865 trace.go:236] Trace[111393635]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 04:16:16.956) (total time: 14743ms): Jan 03 04:16:31 crc kubenswrapper[4865]: Trace[111393635]: ---"Objects listed" error: 14743ms (04:16:31.700) Jan 03 04:16:31 crc kubenswrapper[4865]: Trace[111393635]: [14.743890462s] [14.743890462s] END Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.700573 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.700533 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.701164 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.701935 4865 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.720847 
4865 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.746169 4865 csr.go:261] certificate signing request csr-rmf2x is approved, waiting to be issued Jan 03 04:16:31 crc kubenswrapper[4865]: I0103 04:16:31.752584 4865 csr.go:257] certificate signing request csr-rmf2x is issued Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.085545 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.090154 4865 apiserver.go:52] "Watching apiserver" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.095457 4865 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.095722 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.096121 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.096217 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.096280 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.096362 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.096408 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.096598 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.096690 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.097195 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.097244 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.098546 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.100252 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.101015 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.101059 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.101323 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.101352 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.101629 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.101641 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.101837 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.124623 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.137491 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.146559 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.158214 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.174815 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.194521 4865 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203335 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203416 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203481 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203509 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203539 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203564 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203587 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203607 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203641 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203670 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203695 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203720 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203743 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203771 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203794 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203853 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203908 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203937 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203959 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203962 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.203984 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204066 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204084 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204165 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204207 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204244 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204282 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204317 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204351 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204418 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204453 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204489 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204285 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204457 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204534 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204602 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204772 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204943 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204942 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204960 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204983 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.205880 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.205994 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206065 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206095 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206077 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206261 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206289 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206318 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206288 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206362 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206353 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206674 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206709 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.206955 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.204549 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207026 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207077 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207108 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207036 4865 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207104 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207144 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207297 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207315 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207367 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207458 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207604 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207606 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207656 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207704 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207743 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207790 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207858 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207884 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207914 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207944 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207968 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208001 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208076 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: 
I0103 04:16:32.208118 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207709 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208170 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207730 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208208 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207707 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.207966 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208017 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208023 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208127 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208242 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208460 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208531 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208582 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208635 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208738 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208800 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208839 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208894 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208948 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208984 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209039 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209089 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209149 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209199 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209259 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209320 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209366 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209433 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209604 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209660 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209702 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209745 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209920 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209981 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 04:16:32 crc 
kubenswrapper[4865]: I0103 04:16:32.210034 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210077 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210125 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210162 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210204 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210249 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210300 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210352 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208310 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208407 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208436 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208559 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208616 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208596 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.208822 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209043 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209220 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209254 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209288 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209410 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.209911 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210255 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210424 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210457 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210803 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210837 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210877 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210922 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.210974 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211001 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211019 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211062 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211122 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211170 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211207 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211253 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211305 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211354 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211437 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211498 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211551 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 04:16:32 
crc kubenswrapper[4865]: I0103 04:16:32.211600 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211698 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.212138 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.212840 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.212920 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.212983 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213036 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213081 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213126 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213177 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213220 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213263 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213313 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213361 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213419 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213467 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213514 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213553 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213593 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213633 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 
04:16:32.213679 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213732 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213779 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213816 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213865 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213905 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.213977 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.217785 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211443 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211508 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211722 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.211851 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.216646 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.216977 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.217322 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.218069 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.218110 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.218309 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.218333 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.218676 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.219298 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.219650 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.220100 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.222061 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.223747 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.224117 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.224326 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.224522 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.225272 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.225462 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.225641 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.226140 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.226483 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.226725 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.226822 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.226831 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.225329 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.227412 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.225117 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.227435 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.228634 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.228679 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.228703 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.228557 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.229624 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.230288 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.239526 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.239998 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.240443 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.240469 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.240192 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.243364 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.243729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.243859 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.244044 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.244470 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.244951 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.230209 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.245441 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.245806 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.246252 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.246535 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.247909 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.248972 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.249227 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.249595 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.249944 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.250578 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.250860 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.250912 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.250921 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.250957 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.250983 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251015 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251050 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251078 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251103 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251132 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251160 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251184 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251215 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251245 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251250 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251682 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251862 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.251978 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.253973 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254018 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254049 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254136 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 
04:16:32.254162 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254188 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254211 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254229 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254251 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254273 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254291 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254314 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254334 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254355 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254374 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254410 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254433 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254454 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254475 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254496 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254515 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254536 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254557 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254585 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254611 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254635 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254655 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254674 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254695 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254716 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254748 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254765 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254783 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254804 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254821 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254839 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254858 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254874 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254893 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254913 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254933 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254950 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254971 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.254990 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255008 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255026 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255045 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255063 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255082 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255121 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255138 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255200 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255225 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255253 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255279 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255300 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255317 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255337 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255358 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255396 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255417 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255474 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255817 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255836 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255931 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255949 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255963 4865 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255973 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255983 4865 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.255999 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256014 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256025 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256036 4865 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256044 4865 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node 
\"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256057 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256067 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256076 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256086 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256113 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256123 4865 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256132 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256144 4865 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256153 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256162 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256172 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.256183 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.252142 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.252625 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.253218 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.253325 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.253502 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.253445 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.253692 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.253754 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.287989 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.287269 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.287420 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:16:32.787397277 +0000 UTC m=+19.904450462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.287666 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.287840 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.287891 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288077 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288097 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288117 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288156 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288172 4865 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288189 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288208 4865 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288224 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288241 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288258 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288397 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: 
"31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.288650 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.289058 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.289268 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.289405 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.289707 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.289862 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.289883 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.290238 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.290443 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.290497 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.290589 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.290616 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.290768 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.290943 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.291114 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.291275 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.291455 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.291638 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.291903 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.292049 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.292218 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.292368 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.292731 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.292909 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.293000 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.293017 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.293620 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.294841 4865 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.294972 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.295358 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.295481 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.295645 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.295896 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.296245 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.296531 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.296859 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297176 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297488 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297625 4865 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297646 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297663 4865 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297678 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297694 4865 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297710 
4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297723 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.297735 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.298025 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.298778 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.299016 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.299499 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.299795 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.301742 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.302679 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.302932 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.303781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.305292 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.307494 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308152 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308274 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308307 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308324 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308339 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308354 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308368 4865 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308404 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc 
kubenswrapper[4865]: I0103 04:16:32.308419 4865 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308433 4865 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308447 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308464 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308478 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308490 4865 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308503 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308516 4865 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308531 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308545 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308559 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308573 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308586 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308598 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308611 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") 
on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308625 4865 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308638 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308652 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308665 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308679 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308692 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308708 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc 
kubenswrapper[4865]: I0103 04:16:32.308724 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308740 4865 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308754 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308767 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308782 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308795 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.308827 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:32.808801835 +0000 UTC m=+19.925855010 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308842 4865 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308856 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308869 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.308885 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:32.808877227 +0000 UTC m=+19.925930412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308899 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308914 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308929 4865 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308942 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308956 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308970 4865 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc 
kubenswrapper[4865]: I0103 04:16:32.308982 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.308994 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309004 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309013 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309022 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309031 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309052 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309062 4865 reconciler_common.go:293] 
"Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309072 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309081 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309092 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309101 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309113 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309125 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309135 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309144 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309153 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309163 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309171 4865 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309181 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309191 4865 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309202 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on 
node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309212 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309222 4865 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309232 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309240 4865 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309249 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309258 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309267 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 03 
04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309288 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309297 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309305 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309314 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309323 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309332 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.309341 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.311844 4865 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.311866 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.311895 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.311965 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:32.811933111 +0000 UTC m=+19.928986296 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.312300 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.312667 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.313566 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.313687 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.314249 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.314487 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.314558 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.314929 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.315604 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.315953 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.314684 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.317405 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.317629 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e" exitCode=255 Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.317671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e"} Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.319007 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.319053 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.321002 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.321102 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.321353 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.326637 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.326665 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.326683 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.326735 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:32.826717618 +0000 UTC m=+19.943770993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.328053 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.330297 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.333273 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.334187 4865 scope.go:117] "RemoveContainer" containerID="6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.334293 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.336831 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.337611 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.342081 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.343301 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.347373 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.354326 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.365833 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.375961 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.385277 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.392787 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410611 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410667 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410680 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410691 4865 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410700 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410727 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410740 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410748 4865 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410757 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410765 4865 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410773 4865 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410783 4865 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410812 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410821 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410829 4865 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410837 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410845 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410853 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410861 4865 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410917 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410926 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410936 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on 
node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410944 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.410712 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411009 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411021 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411030 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411055 4865 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411043 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411064 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411150 4865 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411162 4865 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411172 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411204 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411213 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411222 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411231 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411240 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411251 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411281 4865 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411291 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411300 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411309 4865 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411319 4865 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411327 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411366 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411387 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411397 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411463 4865 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411473 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411481 4865 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411489 4865 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411498 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411507 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411538 4865 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411548 4865 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411556 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411564 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411572 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411583 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411591 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411618 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411628 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411638 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc 
kubenswrapper[4865]: I0103 04:16:32.411647 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411658 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411666 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411693 4865 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411703 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411711 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411719 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411729 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411739 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411748 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411774 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411783 4865 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.411791 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.418464 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.426460 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 04:16:32 crc kubenswrapper[4865]: W0103 04:16:32.433619 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-461ec57318d299723af21339f9b74b27ac6f4f5002d13f66ac464c04a2b78875 WatchSource:0}: Error finding container 461ec57318d299723af21339f9b74b27ac6f4f5002d13f66ac464c04a2b78875: Status 404 returned error can't find the container with id 461ec57318d299723af21339f9b74b27ac6f4f5002d13f66ac464c04a2b78875 Jan 03 04:16:32 crc kubenswrapper[4865]: W0103 04:16:32.443058 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4e976af4a32f748d690340ff85bd247c5d7e9663076f8f36222fc3716c0f087b WatchSource:0}: Error finding container 4e976af4a32f748d690340ff85bd247c5d7e9663076f8f36222fc3716c0f087b: Status 404 returned error can't find the container with id 4e976af4a32f748d690340ff85bd247c5d7e9663076f8f36222fc3716c0f087b Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.711794 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 04:16:32 crc kubenswrapper[4865]: W0103 04:16:32.726971 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8f3cbf79643a611e2f8483b95fc36e7b205f21603f75e9a2fac83d84d0404e95 WatchSource:0}: Error finding container 8f3cbf79643a611e2f8483b95fc36e7b205f21603f75e9a2fac83d84d0404e95: Status 404 returned error can't find the container with id 8f3cbf79643a611e2f8483b95fc36e7b205f21603f75e9a2fac83d84d0404e95 Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.753875 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-03 04:11:31 +0000 UTC, rotation deadline is 2026-11-25 23:16:50.699385626 +0000 UTC Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.753918 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7843h0m17.94547084s for next certificate rotation Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.815445 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.815520 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.815554 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.815583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.815688 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.815754 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:33.815717634 +0000 UTC m=+20.932770819 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.815805 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:16:33.815798857 +0000 UTC m=+20.932852042 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.815859 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.815887 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:33.815881079 +0000 UTC m=+20.932934264 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.815968 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.816007 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.816018 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.816043 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:33.816036813 +0000 UTC m=+20.933089998 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:32 crc kubenswrapper[4865]: I0103 04:16:32.916497 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.916632 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.916646 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.916657 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:32 crc kubenswrapper[4865]: E0103 04:16:32.916705 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-03 04:16:33.916690509 +0000 UTC m=+21.033743694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.016328 4865 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.016532 4865 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.016602 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.196:56258->38.102.83.196:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18871d769292aed9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 04:16:13.868642009 +0000 UTC m=+0.985695234,LastTimestamp:2026-01-03 04:16:13.868642009 +0000 UTC m=+0.985695234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.016730 4865 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017041 4865 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017097 4865 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017116 4865 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017135 4865 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second 
and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017429 4865 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017460 4865 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017479 4865 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.017623 4865 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.161686 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.162836 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.164806 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.165890 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.167338 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.168032 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.168896 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.170004 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.170631 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.171515 4865 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.172855 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.173813 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.175361 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.176273 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.177459 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.178310 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.179819 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.180773 4865 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.182069 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.183019 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.183964 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.185343 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.185326 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.186404 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.187737 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.188898 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.189867 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.191727 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.192841 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.194216 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.194825 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.195770 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.196269 4865 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.196436 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.198505 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.199017 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.199560 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.201113 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.202227 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.202219 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.203009 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.204018 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.204746 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.205646 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.206276 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.207313 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.208343 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.208869 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.209793 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.210373 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.211745 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.212306 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.212831 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.213715 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.214290 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.215424 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.215919 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.221822 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.233785 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.254175 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.269139 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.322934 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.325072 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18"} Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.327799 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.330252 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a"} Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.330298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8f3cbf79643a611e2f8483b95fc36e7b205f21603f75e9a2fac83d84d0404e95"} Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.331797 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4e976af4a32f748d690340ff85bd247c5d7e9663076f8f36222fc3716c0f087b"} Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.333614 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d"} Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.333636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75"} Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.333645 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"461ec57318d299723af21339f9b74b27ac6f4f5002d13f66ac464c04a2b78875"} Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.346807 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.383087 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.424431 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.441422 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.453703 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.465895 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.483102 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.507522 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.521815 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.543101 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.545433 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hfsrf"] Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.545721 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.547270 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.547277 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.547635 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.563010 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.588031 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.603817 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.622474 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e3abbf-0e02-4745-b0d2-c6995c1e1b95-hosts-file\") pod \"node-resolver-hfsrf\" (UID: \"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\") " pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.622532 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6rj\" (UniqueName: \"kubernetes.io/projected/c6e3abbf-0e02-4745-b0d2-c6995c1e1b95-kube-api-access-9k6rj\") pod \"node-resolver-hfsrf\" (UID: \"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\") " pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.632992 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.651906 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.668150 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.679829 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.698287 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.713454 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.723424 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6rj\" (UniqueName: \"kubernetes.io/projected/c6e3abbf-0e02-4745-b0d2-c6995c1e1b95-kube-api-access-9k6rj\") pod \"node-resolver-hfsrf\" (UID: \"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\") " 
pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.723471 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e3abbf-0e02-4745-b0d2-c6995c1e1b95-hosts-file\") pod \"node-resolver-hfsrf\" (UID: \"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\") " pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.723548 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e3abbf-0e02-4745-b0d2-c6995c1e1b95-hosts-file\") pod \"node-resolver-hfsrf\" (UID: \"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\") " pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.729761 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd
8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.742619 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6rj\" (UniqueName: \"kubernetes.io/projected/c6e3abbf-0e02-4745-b0d2-c6995c1e1b95-kube-api-access-9k6rj\") pod \"node-resolver-hfsrf\" (UID: \"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\") " pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.746301 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.777406 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.824056 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.824123 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.824150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.824170 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824242 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824293 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:35.824280559 +0000 UTC m=+22.941333744 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824295 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824318 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824351 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824361 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824371 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:35.824351261 +0000 UTC m=+22.941404436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824425 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:35.824409473 +0000 UTC m=+22.941462658 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.824493 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:16:35.824484315 +0000 UTC m=+22.941537500 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.857795 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hfsrf" Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.925468 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.925590 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.925616 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.925626 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:33 crc kubenswrapper[4865]: E0103 04:16:33.925676 4865 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:35.925661495 +0000 UTC m=+23.042714680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:33 crc kubenswrapper[4865]: W0103 04:16:33.960702 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6e3abbf_0e02_4745_b0d2_c6995c1e1b95.slice/crio-76d373ded99d28fcf0891cc1e08d190211de32e70ea9bd386b8a06418da31d2f WatchSource:0}: Error finding container 76d373ded99d28fcf0891cc1e08d190211de32e70ea9bd386b8a06418da31d2f: Status 404 returned error can't find the container with id 76d373ded99d28fcf0891cc1e08d190211de32e70ea9bd386b8a06418da31d2f Jan 03 04:16:33 crc kubenswrapper[4865]: I0103 04:16:33.977627 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.131446 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.155451 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.155451 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.155556 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:34 crc kubenswrapper[4865]: E0103 04:16:34.155574 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:34 crc kubenswrapper[4865]: E0103 04:16:34.155722 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:34 crc kubenswrapper[4865]: E0103 04:16:34.155848 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.160883 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.196534 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.236590 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.326421 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-nrhl2"] Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.326905 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.328703 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mh2rc"] Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.329069 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.332763 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.333337 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.333597 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.333747 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.335348 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.336037 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nz8q7"] Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.336055 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.336104 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.346262 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.346695 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.346848 4865 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.347127 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.348756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hfsrf" event={"ID":"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95","Type":"ContainerStarted","Data":"4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794"} Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.348789 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hfsrf" event={"ID":"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95","Type":"ContainerStarted","Data":"76d373ded99d28fcf0891cc1e08d190211de32e70ea9bd386b8a06418da31d2f"} Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.350683 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.352657 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.389485 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.414441 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.419749 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428187 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-system-cni-dir\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428224 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45dhp\" (UniqueName: \"kubernetes.io/projected/6570eea8-b60f-43b1-830a-0f6293f571b8-kube-api-access-45dhp\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428253 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" 
(UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-k8s-cni-cncf-io\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6570eea8-b60f-43b1-830a-0f6293f571b8-cni-binary-copy\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428289 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-multus-certs\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428307 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/122690aa-cb57-4839-8349-30c5221c5b42-mcd-auth-proxy-config\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428367 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnl4k\" (UniqueName: \"kubernetes.io/projected/122690aa-cb57-4839-8349-30c5221c5b42-kube-api-access-lnl4k\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428428 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-socket-dir-parent\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/122690aa-cb57-4839-8349-30c5221c5b42-proxy-tls\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428544 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-kubelet\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428576 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-cni-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428660 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2fadcfb6-a571-4d6b-af2d-da885a478206-cni-binary-copy\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428689 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-cni-multus\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428784 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-conf-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428803 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjgw6\" (UniqueName: \"kubernetes.io/projected/2fadcfb6-a571-4d6b-af2d-da885a478206-kube-api-access-xjgw6\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428828 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-cnibin\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428847 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-daemon-config\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428864 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-cnibin\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428888 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-os-release\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428903 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-cni-bin\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428919 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-hostroot\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.428983 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-etc-kubernetes\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.429002 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-os-release\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.429018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6570eea8-b60f-43b1-830a-0f6293f571b8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.429043 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/122690aa-cb57-4839-8349-30c5221c5b42-rootfs\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.429066 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.429094 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-system-cni-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.429119 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-netns\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.448739 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.462162 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 
04:16:34.479607 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.491700 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.498202 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.505475 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.514847 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.524653 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530419 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-cnibin\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530450 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-conf-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530467 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjgw6\" (UniqueName: \"kubernetes.io/projected/2fadcfb6-a571-4d6b-af2d-da885a478206-kube-api-access-xjgw6\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " 
pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530484 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-daemon-config\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530500 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-cnibin\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530524 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-os-release\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530539 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-cni-bin\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530554 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-hostroot\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530557 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-conf-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530588 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-cnibin\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530618 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-cnibin\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-cni-bin\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530652 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-hostroot\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530600 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-etc-kubernetes\") pod \"multus-nrhl2\" (UID: 
\"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-etc-kubernetes\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530693 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-os-release\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530709 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6570eea8-b60f-43b1-830a-0f6293f571b8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530727 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/122690aa-cb57-4839-8349-30c5221c5b42-rootfs\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: 
\"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530770 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-system-cni-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530826 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-netns\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-system-cni-dir\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530876 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45dhp\" (UniqueName: \"kubernetes.io/projected/6570eea8-b60f-43b1-830a-0f6293f571b8-kube-api-access-45dhp\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530897 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-k8s-cni-cncf-io\") pod \"multus-nrhl2\" (UID: 
\"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530912 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6570eea8-b60f-43b1-830a-0f6293f571b8-cni-binary-copy\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530926 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-os-release\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530951 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/122690aa-cb57-4839-8349-30c5221c5b42-rootfs\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530950 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-multus-certs\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530927 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-multus-certs\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc 
kubenswrapper[4865]: I0103 04:16:34.530984 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-socket-dir-parent\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531000 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/122690aa-cb57-4839-8349-30c5221c5b42-proxy-tls\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/122690aa-cb57-4839-8349-30c5221c5b42-mcd-auth-proxy-config\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531029 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnl4k\" (UniqueName: \"kubernetes.io/projected/122690aa-cb57-4839-8349-30c5221c5b42-kube-api-access-lnl4k\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-cni-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc 
kubenswrapper[4865]: I0103 04:16:34.531063 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-kubelet\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531082 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2fadcfb6-a571-4d6b-af2d-da885a478206-cni-binary-copy\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531100 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-cni-multus\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531159 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-system-cni-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531178 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-netns\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531211 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-os-release\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531237 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-daemon-config\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531846 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/122690aa-cb57-4839-8349-30c5221c5b42-mcd-auth-proxy-config\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531872 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6570eea8-b60f-43b1-830a-0f6293f571b8-cni-binary-copy\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531314 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531343 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-kubelet\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531337 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6570eea8-b60f-43b1-830a-0f6293f571b8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.530968 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6570eea8-b60f-43b1-830a-0f6293f571b8-system-cni-dir\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531598 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-run-k8s-cni-cncf-io\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531629 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-socket-dir-parent\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531292 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-multus-cni-dir\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531371 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2fadcfb6-a571-4d6b-af2d-da885a478206-host-var-lib-cni-multus\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.531971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2fadcfb6-a571-4d6b-af2d-da885a478206-cni-binary-copy\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.534862 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.536583 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/122690aa-cb57-4839-8349-30c5221c5b42-proxy-tls\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.542817 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.545172 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.546440 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.546799 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjgw6\" (UniqueName: \"kubernetes.io/projected/2fadcfb6-a571-4d6b-af2d-da885a478206-kube-api-access-xjgw6\") pod \"multus-nrhl2\" (UID: \"2fadcfb6-a571-4d6b-af2d-da885a478206\") " pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.548199 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45dhp\" (UniqueName: \"kubernetes.io/projected/6570eea8-b60f-43b1-830a-0f6293f571b8-kube-api-access-45dhp\") pod \"multus-additional-cni-plugins-nz8q7\" (UID: \"6570eea8-b60f-43b1-830a-0f6293f571b8\") " pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.554748 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lnl4k\" (UniqueName: \"kubernetes.io/projected/122690aa-cb57-4839-8349-30c5221c5b42-kube-api-access-lnl4k\") pod \"machine-config-daemon-mh2rc\" (UID: \"122690aa-cb57-4839-8349-30c5221c5b42\") " pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.557848 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.569511 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.579860 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.589803 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.602990 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.613516 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.624743 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.634499 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.647362 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nrhl2" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.648886 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.656129 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:16:34 crc kubenswrapper[4865]: W0103 04:16:34.660055 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fadcfb6_a571_4d6b_af2d_da885a478206.slice/crio-217940dd20c8545be23c516db5ac13371d00b05ee63d420f20b0fd2d94c5c90e WatchSource:0}: Error finding container 217940dd20c8545be23c516db5ac13371d00b05ee63d420f20b0fd2d94c5c90e: Status 404 returned error can't find the container with id 217940dd20c8545be23c516db5ac13371d00b05ee63d420f20b0fd2d94c5c90e Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.660992 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" Jan 03 04:16:34 crc kubenswrapper[4865]: W0103 04:16:34.671208 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod122690aa_cb57_4839_8349_30c5221c5b42.slice/crio-6d336944ce6a7d786a959c5fcd5b90c3a6032a9f7e5d7c0d8ec2336cdea8a7ae WatchSource:0}: Error finding container 6d336944ce6a7d786a959c5fcd5b90c3a6032a9f7e5d7c0d8ec2336cdea8a7ae: Status 404 returned error can't find the container with id 6d336944ce6a7d786a959c5fcd5b90c3a6032a9f7e5d7c0d8ec2336cdea8a7ae Jan 03 04:16:34 crc kubenswrapper[4865]: W0103 04:16:34.672551 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6570eea8_b60f_43b1_830a_0f6293f571b8.slice/crio-9cac5cb544286d7a2a864a482e2133bd1f4b9a532f621f61b47747eb0435f8d5 WatchSource:0}: Error finding container 9cac5cb544286d7a2a864a482e2133bd1f4b9a532f621f61b47747eb0435f8d5: Status 404 returned error can't find the container with id 9cac5cb544286d7a2a864a482e2133bd1f4b9a532f621f61b47747eb0435f8d5 Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.740713 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jvxfl"] Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.741464 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.743314 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.743615 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.744004 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.744205 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.744323 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.744474 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.744591 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.756205 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.771843 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.786569 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.798815 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.812033 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.824689 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835117 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jvxfl\" (UID: 
\"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835159 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-netns\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835214 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-var-lib-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835244 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-etc-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835271 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-slash\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835295 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-netd\") pod 
\"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835331 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835355 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhlg\" (UniqueName: \"kubernetes.io/projected/226b5379-0cbe-42e6-b5af-917a5e4b734d-kube-api-access-9bhlg\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835400 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835422 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-config\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835445 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-node-log\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835466 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-script-lib\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835492 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-systemd-units\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835607 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-ovn\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835747 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-kubelet\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835772 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-bin\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835802 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovn-node-metrics-cert\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835826 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-log-socket\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.835847 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-env-overrides\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.836059 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-systemd\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.842992 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.859264 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.876653 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.890264 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.901082 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.915598 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:34Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.936997 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-node-log\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937050 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-script-lib\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937076 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-systemd-units\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937097 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-ovn\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937133 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-kubelet\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937155 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-bin\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937169 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-ovn\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937199 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-systemd-units\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937119 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-node-log\") pod \"ovnkube-node-jvxfl\" (UID: 
\"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937249 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-kubelet\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937334 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovn-node-metrics-cert\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937375 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-bin\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937361 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-systemd\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937480 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-log-socket\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 
04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937501 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-env-overrides\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937531 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-systemd\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937570 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-var-lib-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937568 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-log-socket\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 
04:16:34.937592 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-netns\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937612 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-etc-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937622 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-netns\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937628 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-var-lib-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937637 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-slash\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937659 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-etc-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937660 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-netd\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937695 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-netd\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937708 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-slash\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937717 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937742 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhlg\" (UniqueName: 
\"kubernetes.io/projected/226b5379-0cbe-42e6-b5af-917a5e4b734d-kube-api-access-9bhlg\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937764 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937698 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937785 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-config\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937791 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-script-lib\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937808 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-openvswitch\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.937810 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.938327 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-config\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.938431 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-env-overrides\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.943063 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovn-node-metrics-cert\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:34 crc kubenswrapper[4865]: I0103 04:16:34.957192 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhlg\" (UniqueName: 
\"kubernetes.io/projected/226b5379-0cbe-42e6-b5af-917a5e4b734d-kube-api-access-9bhlg\") pod \"ovnkube-node-jvxfl\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.061605 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:35 crc kubenswrapper[4865]: W0103 04:16:35.074129 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod226b5379_0cbe_42e6_b5af_917a5e4b734d.slice/crio-f78370ef63a51344507d3580af1fbd3a0e32470e8708dd946def0d1e156b59d4 WatchSource:0}: Error finding container f78370ef63a51344507d3580af1fbd3a0e32470e8708dd946def0d1e156b59d4: Status 404 returned error can't find the container with id f78370ef63a51344507d3580af1fbd3a0e32470e8708dd946def0d1e156b59d4 Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.355880 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.355939 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.355951 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"6d336944ce6a7d786a959c5fcd5b90c3a6032a9f7e5d7c0d8ec2336cdea8a7ae"} Jan 03 04:16:35 crc kubenswrapper[4865]: 
I0103 04:16:35.358168 4865 generic.go:334] "Generic (PLEG): container finished" podID="6570eea8-b60f-43b1-830a-0f6293f571b8" containerID="0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6" exitCode=0 Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.358255 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerDied","Data":"0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.358358 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerStarted","Data":"9cac5cb544286d7a2a864a482e2133bd1f4b9a532f621f61b47747eb0435f8d5"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.360687 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.364443 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca" exitCode=0 Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.364557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.364662 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" 
event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"f78370ef63a51344507d3580af1fbd3a0e32470e8708dd946def0d1e156b59d4"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.379600 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerStarted","Data":"45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.379672 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerStarted","Data":"217940dd20c8545be23c516db5ac13371d00b05ee63d420f20b0fd2d94c5c90e"} Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.380436 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.406699 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.426170 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.437908 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.454714 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.473665 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.501566 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.522032 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.536124 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.548681 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.562071 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.574398 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.586895 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.603902 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.619411 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.630817 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.643581 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.655440 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0
643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.669753 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.683017 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.700653 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.717245 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.733711 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.748325 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:35Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.845119 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.845246 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.845281 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845320 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:16:39.845284524 +0000 UTC m=+26.962337749 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845351 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845398 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845442 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:39.845411787 +0000 UTC m=+26.962465102 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.845474 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845566 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:39.845555781 +0000 UTC m=+26.962608966 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845663 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845701 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845725 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.845800 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:39.845778767 +0000 UTC m=+26.962832012 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:35 crc kubenswrapper[4865]: I0103 04:16:35.946749 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.946964 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.947203 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.947216 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:35 crc kubenswrapper[4865]: E0103 04:16:35.947260 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-03 04:16:39.947248406 +0000 UTC m=+27.064301601 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.005310 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.007768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.007805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.007817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.007931 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.023064 4865 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.023436 4865 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.025961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.026010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 
04:16:36.026027 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.026060 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.026078 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.061156 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.066575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.066614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.066628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.066648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.066663 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.080300 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.085505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.085549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.085559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.085583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.085594 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.099351 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.105876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.105911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.105921 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.105936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.105945 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.118931 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.124640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.124688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.124699 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.124722 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.124736 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.139951 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.140107 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.142099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.142124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.142145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.142159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.142168 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.154982 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.154998 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.155006 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.155099 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.155258 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:36 crc kubenswrapper[4865]: E0103 04:16:36.155464 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.246648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.247163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.247185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.247215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.247245 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.259993 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4m4gh"] Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.260540 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.262522 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.263331 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.264260 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.268612 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.278834 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.292008 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.303177 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.320490 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.338114 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.349669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.349789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.349874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.349979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.350073 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.351829 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ad15b416-c3c0-45d7-a36d-974f798313fb-serviceca\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.351875 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad15b416-c3c0-45d7-a36d-974f798313fb-host\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.351938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdknp\" (UniqueName: \"kubernetes.io/projected/ad15b416-c3c0-45d7-a36d-974f798313fb-kube-api-access-hdknp\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.353569 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.371556 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.389688 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.392198 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.392259 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.392277 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" 
event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.392287 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.392295 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.395179 4865 generic.go:334] "Generic (PLEG): container finished" podID="6570eea8-b60f-43b1-830a-0f6293f571b8" containerID="f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38" exitCode=0 Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.395212 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerDied","Data":"f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.415931 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.433507 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.452609 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdknp\" (UniqueName: \"kubernetes.io/projected/ad15b416-c3c0-45d7-a36d-974f798313fb-kube-api-access-hdknp\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.452723 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ad15b416-c3c0-45d7-a36d-974f798313fb-serviceca\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.452763 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad15b416-c3c0-45d7-a36d-974f798313fb-host\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.453193 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.453622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.453651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.453662 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.453679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.453691 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.453957 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad15b416-c3c0-45d7-a36d-974f798313fb-host\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.454541 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ad15b416-c3c0-45d7-a36d-974f798313fb-serviceca\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.470854 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.483795 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af3
24c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\
\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.484281 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdknp\" (UniqueName: \"kubernetes.io/projected/ad15b416-c3c0-45d7-a36d-974f798313fb-kube-api-access-hdknp\") pod \"node-ca-4m4gh\" (UID: \"ad15b416-c3c0-45d7-a36d-974f798313fb\") " pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.503062 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.517000 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.536516 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.553510 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0
643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.555929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.555953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.555962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.555978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.555987 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.573408 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.592309 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4m4gh" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.602844 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: W0103 04:16:36.604688 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad15b416_c3c0_45d7_a36d_974f798313fb.slice/crio-5961b3b1b0b2a0a9e10b3cf4c87ed6d55ebe54029443890abc604cbb0932f236 WatchSource:0}: Error finding container 5961b3b1b0b2a0a9e10b3cf4c87ed6d55ebe54029443890abc604cbb0932f236: Status 404 returned error can't find the container with id 5961b3b1b0b2a0a9e10b3cf4c87ed6d55ebe54029443890abc604cbb0932f236 Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.624997 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.649341 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.662209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.662243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.662254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.662267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.662275 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.667408 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.697622 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.716733 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.731519 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.743576 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:36Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.764087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.764127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.764136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.764152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.764163 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.866908 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.867307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.867320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.867335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.867347 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.970352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.970415 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.970425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.970441 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:36 crc kubenswrapper[4865]: I0103 04:16:36.970452 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:36Z","lastTransitionTime":"2026-01-03T04:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.072837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.072886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.072897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.072914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.072925 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.175230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.175265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.175273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.175289 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.175303 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.277003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.277047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.277056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.277074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.277086 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.379746 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.379789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.379801 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.379816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.379827 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.391134 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.395415 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.400939 4865 generic.go:334] "Generic (PLEG): container finished" podID="6570eea8-b60f-43b1-830a-0f6293f571b8" containerID="e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5" exitCode=0 Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.401050 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerDied","Data":"e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.402893 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4m4gh" event={"ID":"ad15b416-c3c0-45d7-a36d-974f798313fb","Type":"ContainerStarted","Data":"18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.402926 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4m4gh" event={"ID":"ad15b416-c3c0-45d7-a36d-974f798313fb","Type":"ContainerStarted","Data":"5961b3b1b0b2a0a9e10b3cf4c87ed6d55ebe54029443890abc604cbb0932f236"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.403201 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.403420 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.407417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.430906 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.445240 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.458661 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.476862 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.488337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.488364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.488374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.488414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.488426 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.495611 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:
16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.506671 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.523159 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.547040 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.563909 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.577476 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.589882 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.590980 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.591025 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.591036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.591056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.591080 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.601199 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.616039 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 
04:16:37.635677 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.649266 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.663285 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.675914 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.687486 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0
643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.693336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.693405 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.693416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.693433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.693445 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.702529 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.714465 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.725997 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.736597 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.750875 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.765080 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.784598 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.795562 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.795603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.795612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.795629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.795638 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.798258 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:37Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.898542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.898588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.898600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.898629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:37 crc kubenswrapper[4865]: I0103 04:16:37.898643 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:37Z","lastTransitionTime":"2026-01-03T04:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.003902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.004229 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.004429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.004832 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.005093 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.107924 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.107964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.108026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.108044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.108056 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.155109 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.155123 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:38 crc kubenswrapper[4865]: E0103 04:16:38.155279 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.155126 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:38 crc kubenswrapper[4865]: E0103 04:16:38.155420 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:38 crc kubenswrapper[4865]: E0103 04:16:38.155514 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.210555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.210593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.210606 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.210626 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.210638 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.312756 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.312945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.313007 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.313070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.313143 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.413167 4865 generic.go:334] "Generic (PLEG): container finished" podID="6570eea8-b60f-43b1-830a-0f6293f571b8" containerID="c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134" exitCode=0 Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.413249 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerDied","Data":"c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.415367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.415426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.415442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.415457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.415468 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: E0103 04:16:38.420984 4865 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.426984 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.441708 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.454651 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.465642 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.482327 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.497349 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.512090 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.518016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc 
kubenswrapper[4865]: I0103 04:16:38.518057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.518070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.518086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.518097 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.524338 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.540564 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.558160 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.575664 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.585992 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.595230 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.605949 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.620410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.620455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.620468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.620488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.620501 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.722873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.722925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.722937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.722956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.722972 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.825192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.825243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.825254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.825273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.825288 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.928268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.928299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.928309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.928322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:38 crc kubenswrapper[4865]: I0103 04:16:38.928331 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:38Z","lastTransitionTime":"2026-01-03T04:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.031692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.031743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.031753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.031768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.031778 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.135866 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.136342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.136361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.136436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.136461 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.239757 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.239820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.239837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.239860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.239879 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.346608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.346663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.346680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.346702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.346719 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.421103 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.423589 4865 generic.go:334] "Generic (PLEG): container finished" podID="6570eea8-b60f-43b1-830a-0f6293f571b8" containerID="769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62" exitCode=0 Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.424370 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerDied","Data":"769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.460185 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.465718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.465773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.465789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.465812 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.465827 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.484867 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.503794 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.520267 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.535760 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.551427 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0
643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.568432 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.570275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.570309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.570321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.570336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.570347 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.583863 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.597184 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.611011 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.627988 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.643764 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.699835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.699890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.699907 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 
04:16:39.699929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.699945 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.704063 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.723000 4865 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:39Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.803156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.803210 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.803227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.803249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.803267 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.900520 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.900724 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.900805 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.900888 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901103 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 
04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901143 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901170 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901283 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:47.901252327 +0000 UTC m=+35.018305552 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901512 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901553 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-03 04:16:47.901527255 +0000 UTC m=+35.018580440 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901572 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901608 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:47.901588186 +0000 UTC m=+35.018641411 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:39 crc kubenswrapper[4865]: E0103 04:16:39.901632 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:47.901620207 +0000 UTC m=+35.018673402 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.905477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.905518 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.905536 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.905559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:39 crc kubenswrapper[4865]: I0103 04:16:39.905576 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:39Z","lastTransitionTime":"2026-01-03T04:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.001462 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:40 crc kubenswrapper[4865]: E0103 04:16:40.001602 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:40 crc kubenswrapper[4865]: E0103 04:16:40.001636 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:40 crc kubenswrapper[4865]: E0103 04:16:40.001651 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:40 crc kubenswrapper[4865]: E0103 04:16:40.001720 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:48.001699348 +0000 UTC m=+35.118752543 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.008803 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.008863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.008874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.008895 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.008912 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.112372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.112444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.112456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.112477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.112489 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.155719 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.155723 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:40 crc kubenswrapper[4865]: E0103 04:16:40.156297 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:40 crc kubenswrapper[4865]: E0103 04:16:40.156320 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.155813 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:40 crc kubenswrapper[4865]: E0103 04:16:40.156909 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.215070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.215310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.215434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.215606 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.215737 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.318978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.319034 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.319047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.319070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.319085 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.422580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.422660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.422683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.422711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.422741 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.433617 4865 generic.go:334] "Generic (PLEG): container finished" podID="6570eea8-b60f-43b1-830a-0f6293f571b8" containerID="51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936" exitCode=0 Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.433669 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerDied","Data":"51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.450274 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.481608 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.507832 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.525643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.525715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.525735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.525764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.525783 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.530975 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.548737 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.566223 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.584833 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.603372 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.617416 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.628558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.628583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.628593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.628605 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.628614 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.632947 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.648180 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.666955 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.680747 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.696638 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:40Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.731535 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.731704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.731830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.731967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.732148 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.835007 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.835242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.835349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.835463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.835560 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.938543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.938607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.938629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.938654 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:40 crc kubenswrapper[4865]: I0103 04:16:40.938672 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:40Z","lastTransitionTime":"2026-01-03T04:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.041703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.041770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.041788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.041814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.041832 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.146350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.146444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.146467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.146524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.146542 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.249090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.249138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.249156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.249184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.249200 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.352850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.352913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.352935 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.352968 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.352993 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.444175 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" event={"ID":"6570eea8-b60f-43b1-830a-0f6293f571b8","Type":"ContainerStarted","Data":"9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.454340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.455003 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.455305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.455447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.455473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.455503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.455525 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.471210 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.493528 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.494663 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.514566 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.539339 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.562604 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.562681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.562706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.562738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.562759 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.576017 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.598959 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.611821 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.623980 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.640855 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.659976 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.665046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.665090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.665102 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.665124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.665134 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.677417 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.690464 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.706347 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.726627 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.738683 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.754783 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.767752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.767804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.767821 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.767844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.767892 4865 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.773053 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.792216 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.810469 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.834332 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.864059 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.870211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.870262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.870278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.870306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.870331 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.886813 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.908205 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.929147 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.948564 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.967625 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.972771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.972864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.972888 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.972912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.972930 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:41Z","lastTransitionTime":"2026-01-03T04:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.981136 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77
f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:41 crc kubenswrapper[4865]: I0103 04:16:41.990957 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:41Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.075574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.075619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.075631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.075662 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.075676 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.154844 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.154900 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.154969 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:42 crc kubenswrapper[4865]: E0103 04:16:42.155016 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:42 crc kubenswrapper[4865]: E0103 04:16:42.155194 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:42 crc kubenswrapper[4865]: E0103 04:16:42.155348 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.178017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.178092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.178103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.178117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.178126 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.281106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.281164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.281180 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.281203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.281222 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.384270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.384335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.384347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.384362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.384373 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.458241 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.459092 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.487608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.488137 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.488159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.488184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.488201 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.500582 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.519060 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2da
ed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"m
ountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.545589 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.559165 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.580024 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.591045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.591199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.591262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.591336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.591413 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.599805 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:
16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.614667 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.634039 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.647628 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.659216 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.673965 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.693568 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.693719 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.693769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.693812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.693869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.693883 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.716185 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.741272 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.784398 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:42Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.796277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.796329 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.796346 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.796370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.796415 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.898445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.898679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.898744 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.898814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:42 crc kubenswrapper[4865]: I0103 04:16:42.898880 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:42Z","lastTransitionTime":"2026-01-03T04:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.001305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.001548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.001611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.001680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.001741 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.104450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.104526 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.104544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.104570 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.104585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.169859 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.180580 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.189487 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.200122 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.208036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.208182 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.208277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.208343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.208444 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.214655 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.229880 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.242554 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.253538 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.266983 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.294596 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.308946 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.310326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.310364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.310397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.310413 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.310426 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.321595 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.341028 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04
:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.354288 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T04:16:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.412879 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.412911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.412943 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.412963 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.412975 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.465550 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.523665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.523721 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.523735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.523752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.523763 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.626300 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.626364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.626410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.626435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.626454 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.728865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.728917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.728928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.728945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.728960 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.763271 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.830968 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.831003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.831011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.831024 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.831033 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.933117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.933171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.933187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.933210 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:43 crc kubenswrapper[4865]: I0103 04:16:43.933229 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:43Z","lastTransitionTime":"2026-01-03T04:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.035719 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.035789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.035806 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.035833 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.035851 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.137999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.138036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.138048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.138086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.138100 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.155466 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.155502 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.155517 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:44 crc kubenswrapper[4865]: E0103 04:16:44.156141 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:44 crc kubenswrapper[4865]: E0103 04:16:44.156298 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:44 crc kubenswrapper[4865]: E0103 04:16:44.156393 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.240281 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.240346 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.240367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.240437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.240463 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.342955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.343020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.343040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.343064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.343084 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.446336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.446441 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.446465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.446491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.446516 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.472082 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/0.log" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.477286 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6" exitCode=1 Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.477335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.479519 4865 scope.go:117] "RemoveContainer" containerID="7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.499649 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.520239 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.537300 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.550039 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.550080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.550094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.550112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.550123 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.550777 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.567135 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.581891 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.600453 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.626955 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.640766 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.652797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.652843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.652854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc 
kubenswrapper[4865]: I0103 04:16:44.652875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.652887 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.659550 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51
932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.685122 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:44Z\\\",\\\"message\\\":\\\" 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0103 04:16:43.522094 6151 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0103 04:16:43.522111 6151 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0103 04:16:43.522146 6151 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0103 04:16:43.522162 6151 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0103 04:16:43.522177 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0103 04:16:43.522233 6151 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0103 04:16:43.522251 6151 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0103 04:16:43.522282 6151 factory.go:656] Stopping watch factory\\\\nI0103 04:16:43.522297 6151 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:16:43.522322 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0103 04:16:43.522334 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0103 04:16:43.522342 6151 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0103 04:16:43.522350 6151 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0103 04:16:43.522356 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0103 04:16:43.522363 6151 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9f
bb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.703435 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.720991 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.739463 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:44Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.756478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc 
kubenswrapper[4865]: I0103 04:16:44.756559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.756581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.756610 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.756632 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.860096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.860190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.860747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.860827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.861123 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.964724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.964788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.964810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.964841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:44 crc kubenswrapper[4865]: I0103 04:16:44.964864 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:44Z","lastTransitionTime":"2026-01-03T04:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.067652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.067698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.067709 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.067729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.067742 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.170417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.170470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.170488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.170512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.170528 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.272998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.273356 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.273367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.273398 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.273411 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.376013 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.376049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.376057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.376070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.376080 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.479265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.479312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.479323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.479342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.479354 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.482678 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/0.log" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.486409 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.486565 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.502674 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4c
f86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.517605 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.532869 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0
643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.557176 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.579974 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:44Z\\\",\\\"message\\\":\\\" 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0103 04:16:43.522094 6151 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0103 04:16:43.522111 6151 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0103 04:16:43.522146 6151 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0103 04:16:43.522162 6151 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0103 04:16:43.522177 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0103 04:16:43.522233 6151 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0103 04:16:43.522251 6151 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0103 04:16:43.522282 6151 factory.go:656] Stopping watch factory\\\\nI0103 04:16:43.522297 6151 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:16:43.522322 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0103 04:16:43.522334 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0103 04:16:43.522342 6151 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0103 04:16:43.522350 6151 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0103 04:16:43.522356 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0103 04:16:43.522363 6151 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.581569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.581661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.581730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.581801 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.581860 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.600729 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.618578 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.633858 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.649278 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.662673 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.682840 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.683942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.683978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.683990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.684007 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.684018 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.695131 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.708156 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.719711 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:45Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.786861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.786919 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.786930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.786949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.786963 4865 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.889608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.889663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.889676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.889695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.889708 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.992057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.992103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.992117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.992134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:45 crc kubenswrapper[4865]: I0103 04:16:45.992147 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:45Z","lastTransitionTime":"2026-01-03T04:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.094789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.094857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.094875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.094899 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.094917 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.154670 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.154805 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.154986 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.155507 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.155652 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.155927 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.198443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.198509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.198532 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.198563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.198580 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.199997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.200055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.200082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.200132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.200145 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.221784 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.226907 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.226940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.226951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.226969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.226981 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.240806 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.245218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.245259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.245270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.245290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.245301 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.260434 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.265653 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.265681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.265690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.265705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.265714 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.285840 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.291127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.291163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.291175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.291192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.291204 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.308729 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.308872 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.311267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.311297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.311306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.311319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.311329 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.399192 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt"] Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.400048 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.404077 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.404188 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.413752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.413791 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.413802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.413817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.413827 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.427084 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.446799 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.464663 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.465301 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwtw\" (UniqueName: \"kubernetes.io/projected/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-kube-api-access-gqwtw\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.465568 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.465753 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.465966 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.488170 4865 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:3
6Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.492519 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/1.log" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.493950 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/0.log" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.498333 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5" exitCode=1 Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.498416 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.498501 4865 scope.go:117] "RemoveContainer" containerID="7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.500375 4865 scope.go:117] "RemoveContainer" containerID="70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5" Jan 03 04:16:46 crc kubenswrapper[4865]: E0103 04:16:46.500795 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.513367 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:44Z\\\",\\\"message\\\":\\\" 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0103 04:16:43.522094 6151 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0103 04:16:43.522111 6151 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0103 04:16:43.522146 6151 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0103 04:16:43.522162 6151 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0103 04:16:43.522177 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0103 04:16:43.522233 6151 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0103 04:16:43.522251 6151 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0103 04:16:43.522282 6151 factory.go:656] Stopping watch factory\\\\nI0103 04:16:43.522297 6151 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:16:43.522322 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0103 04:16:43.522334 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0103 04:16:43.522342 6151 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0103 04:16:43.522350 6151 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0103 04:16:43.522356 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0103 04:16:43.522363 6151 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.516046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.516107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.516132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.516165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.516189 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.540119 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.556595 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.566865 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.567561 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.567736 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwtw\" (UniqueName: \"kubernetes.io/projected/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-kube-api-access-gqwtw\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.567846 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.568370 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.568792 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 
04:16:46.574619 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.575633 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.589130 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.596716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwtw\" (UniqueName: \"kubernetes.io/projected/e4fe3f7a-8ad0-4271-a182-96a880ab89c7-kube-api-access-gqwtw\") pod \"ovnkube-control-plane-749d76644c-phsbt\" (UID: \"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.612869 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.620687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.620745 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.620760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc 
kubenswrapper[4865]: I0103 04:16:46.620781 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.620794 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.631504 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.651251 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.666885 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.684448 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.703309 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.724086 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.724629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.724687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.724705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.724730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.724747 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.729626 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.745659 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: W0103 04:16:46.753156 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4fe3f7a_8ad0_4271_a182_96a880ab89c7.slice/crio-5c5a1f3fc20ecbe498cb4fa758570c791882ffc686ab38ef95e7ad0f2c8d68c7 WatchSource:0}: Error finding container 5c5a1f3fc20ecbe498cb4fa758570c791882ffc686ab38ef95e7ad0f2c8d68c7: Status 404 returned error can't find the container with id 5c5a1f3fc20ecbe498cb4fa758570c791882ffc686ab38ef95e7ad0f2c8d68c7 Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.766126 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d6
6438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.784758 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8c
cf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host
/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.815828 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d9a019bee86a0c9606c70b07c9b898fe7bad886aad8a34922f2a29becfe07e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:44Z\\\",\\\"message\\\":\\\" 6151 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0103 04:16:43.522094 6151 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0103 04:16:43.522111 6151 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0103 04:16:43.522146 6151 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0103 04:16:43.522162 6151 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0103 04:16:43.522177 6151 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0103 04:16:43.522233 6151 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0103 04:16:43.522251 6151 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0103 04:16:43.522282 6151 factory.go:656] Stopping watch factory\\\\nI0103 04:16:43.522297 6151 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:16:43.522322 6151 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0103 04:16:43.522334 6151 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0103 04:16:43.522342 6151 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0103 04:16:43.522350 6151 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0103 04:16:43.522356 6151 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0103 04:16:43.522363 6151 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"alancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611300 6273 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611286 6273 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0103 04:16:45.611476 6273 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab
9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.827676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.827737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.827766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.827800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.827826 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.839755 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.855115 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.871938 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.895336 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.920918 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.931267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.931316 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.931335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.931361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.931419 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:46Z","lastTransitionTime":"2026-01-03T04:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.936129 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.955288 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:46 crc kubenswrapper[4865]: I0103 04:16:46.974925 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.001186 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:46Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.016349 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.033252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.033283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.033294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.033310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.033321 4865 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.136167 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.136245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.136264 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.136294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.136315 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.239240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.239306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.239323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.239353 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.239372 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.343301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.343354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.343373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.343444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.343509 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.446958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.447017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.447036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.447091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.447109 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.505145 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" event={"ID":"e4fe3f7a-8ad0-4271-a182-96a880ab89c7","Type":"ContainerStarted","Data":"5c5a1f3fc20ecbe498cb4fa758570c791882ffc686ab38ef95e7ad0f2c8d68c7"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.508224 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/1.log" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.514574 4865 scope.go:117] "RemoveContainer" containerID="70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5" Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.514855 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.533558 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.550659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.550715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.550733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.550760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.550781 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.556170 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.578094 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.600496 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.622865 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.643202 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.654057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.654108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.654125 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.654151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.654169 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.664473 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:
16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.683904 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.708089 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.742544 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"alancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611300 6273 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611286 6273 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0103 04:16:45.611476 6273 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.757377 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.757489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.757512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.757547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.757570 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.770510 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.795072 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.809746 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.829996 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.859011 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.860447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.860515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.860585 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.860612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.860701 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.933618 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wb9c7"] Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.934586 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.934703 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.958693 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.965916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.966027 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.966054 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.966084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.966106 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:47Z","lastTransitionTime":"2026-01-03T04:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.981202 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.981373 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.981515 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:17:03.981489633 +0000 UTC m=+51.098542858 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.981766 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.981832 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.981896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982072 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982116 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982163 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982192 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982209 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-03 04:17:03.982174722 +0000 UTC m=+51.099227947 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982214 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982267 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:03.982226803 +0000 UTC m=+51.099280018 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:16:47 crc kubenswrapper[4865]: E0103 04:16:47.982303 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:03.982288315 +0000 UTC m=+51.099341540 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.982302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.982447 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x7xt\" (UniqueName: \"kubernetes.io/projected/20f5ddd2-fabb-45db-83ad-9c45135ec710-kube-api-access-7x7xt\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:47 crc kubenswrapper[4865]: I0103 04:16:47.999758 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:47Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.026803 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.056095 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"alancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611300 6273 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611286 6273 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0103 04:16:45.611476 6273 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.069563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.069629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.069646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.069671 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.069689 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.083451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.083517 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.083547 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x7xt\" (UniqueName: \"kubernetes.io/projected/20f5ddd2-fabb-45db-83ad-9c45135ec710-kube-api-access-7x7xt\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.083938 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.083963 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.083976 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.084029 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:04.08401427 +0000 UTC m=+51.201067465 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.084659 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.084982 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:48.584941366 +0000 UTC m=+35.701994631 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.091212 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.104851 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x7xt\" (UniqueName: \"kubernetes.io/projected/20f5ddd2-fabb-45db-83ad-9c45135ec710-kube-api-access-7x7xt\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.108183 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.121203 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.134566 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.150216 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.154681 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.154687 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.155115 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.154706 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.155471 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.155126 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.168413 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.171919 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.171974 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.171991 
4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.172017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.172035 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.182103 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.200355 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.213466 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.230359 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.241119 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc 
kubenswrapper[4865]: I0103 04:16:48.274648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.274702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.274720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.274746 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.274761 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.376784 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.376851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.376876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.376908 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.376928 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.480149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.480199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.480216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.480238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.480255 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.520142 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" event={"ID":"e4fe3f7a-8ad0-4271-a182-96a880ab89c7","Type":"ContainerStarted","Data":"4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.520210 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" event={"ID":"e4fe3f7a-8ad0-4271-a182-96a880ab89c7","Type":"ContainerStarted","Data":"e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.546899 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.563286 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.573818 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.578820 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.582808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc 
kubenswrapper[4865]: I0103 04:16:48.582900 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.582913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.582933 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.582946 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.590422 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.590579 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:48 crc kubenswrapper[4865]: E0103 04:16:48.590672 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:49.590652602 +0000 UTC m=+36.707705797 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.595214 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc 
kubenswrapper[4865]: I0103 04:16:48.611154 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584
c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.656365 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"alancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611300 6273 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611286 6273 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0103 04:16:45.611476 6273 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.680230 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.684998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.685056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.685077 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.685103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.685120 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.697079 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77
f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.708398 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.723822 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.740882 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.760709 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.779187 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.787650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.787704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.787723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 
04:16:48.788267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.788288 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.797162 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.811548 4865 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.825365 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc 
kubenswrapper[4865]: I0103 04:16:48.839073 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc 
kubenswrapper[4865]: I0103 04:16:48.868666 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"alancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611300 6273 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611286 6273 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0103 04:16:45.611476 6273 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.889330 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.890811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.890858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.890870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.890892 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.890908 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.910286 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.932757 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04
:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.948705 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.969718 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.983992 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.993029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.993080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.993094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.993114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.993127 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:48Z","lastTransitionTime":"2026-01-03T04:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:48 crc kubenswrapper[4865]: I0103 04:16:48.996850 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:48Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.009531 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:49Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.023256 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:49Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.038882 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:49Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.058425 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:49Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.073725 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:49Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.089589 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:49Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.095261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.095301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.095316 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.095336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.095352 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.136702 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:49Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.155225 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:49 crc kubenswrapper[4865]: E0103 04:16:49.155406 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.197240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.197276 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.197286 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.197303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.197314 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.299670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.300074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.300093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.300119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.300142 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.403422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.403479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.403500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.403524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.403543 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.506753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.506808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.506824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.506846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.506864 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.600369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:49 crc kubenswrapper[4865]: E0103 04:16:49.600664 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:49 crc kubenswrapper[4865]: E0103 04:16:49.600782 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:51.600752938 +0000 UTC m=+38.717806153 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.609219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.609279 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.609295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.609321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.609339 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.713072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.713159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.713181 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.713213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.713235 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.815935 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.815998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.816020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.816048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.816066 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.918759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.918820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.918838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.918862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:49 crc kubenswrapper[4865]: I0103 04:16:49.918878 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:49Z","lastTransitionTime":"2026-01-03T04:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.022369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.022460 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.022477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.022502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.022519 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.124508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.124545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.124558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.124571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.124579 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.155328 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.155348 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.155352 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:50 crc kubenswrapper[4865]: E0103 04:16:50.155535 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:50 crc kubenswrapper[4865]: E0103 04:16:50.155655 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:50 crc kubenswrapper[4865]: E0103 04:16:50.155779 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.228739 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.228773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.228781 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.228794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.228804 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.332686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.332741 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.332760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.332785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.332801 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.435788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.435862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.435884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.435917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.435942 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.538300 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.538361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.538425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.538450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.538467 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.641942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.641993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.642009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.642031 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.642047 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.745628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.745687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.745712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.745741 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.745809 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.848246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.848304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.848322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.848347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.848367 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.951917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.951991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.952015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.952047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:50 crc kubenswrapper[4865]: I0103 04:16:50.952068 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:50Z","lastTransitionTime":"2026-01-03T04:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.055224 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.055295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.055312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.055336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.055353 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.155552 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:51 crc kubenswrapper[4865]: E0103 04:16:51.155931 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.158119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.158175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.158194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.158218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.158235 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.260592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.260652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.260669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.260692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.260709 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.364554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.364616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.364671 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.364695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.364711 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.467724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.467799 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.467821 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.467850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.467871 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.570881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.570943 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.570961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.570988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.571006 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.623476 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:51 crc kubenswrapper[4865]: E0103 04:16:51.623639 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:51 crc kubenswrapper[4865]: E0103 04:16:51.623733 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:16:55.623707387 +0000 UTC m=+42.740760602 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.674426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.674486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.674502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.674525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.674542 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.777667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.777744 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.777765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.777794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.777815 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.879977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.880049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.880066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.880090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.880277 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.982522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.982573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.982585 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.982602 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:51 crc kubenswrapper[4865]: I0103 04:16:51.982613 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:51Z","lastTransitionTime":"2026-01-03T04:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.086004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.086121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.086143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.086178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.086199 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.154696 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.154854 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.154717 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:52 crc kubenswrapper[4865]: E0103 04:16:52.155108 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:52 crc kubenswrapper[4865]: E0103 04:16:52.154891 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:52 crc kubenswrapper[4865]: E0103 04:16:52.155314 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.189520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.189599 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.189619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.189647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.189665 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.293550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.293620 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.293637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.293665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.293682 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.396310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.396430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.396471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.396507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.396530 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.501927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.501999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.502022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.502048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.502065 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.605350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.605467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.605491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.605520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.605540 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.708401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.708443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.708456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.708476 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.708487 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.811549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.811614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.811631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.811656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.811678 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.914625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.914700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.914725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.914760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:52 crc kubenswrapper[4865]: I0103 04:16:52.914783 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:52Z","lastTransitionTime":"2026-01-03T04:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.018593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.018884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.019033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.019215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.019345 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.122606 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.122668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.122687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.122712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.122730 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.155601 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:53 crc kubenswrapper[4865]: E0103 04:16:53.155884 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.174180 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc 
kubenswrapper[4865]: I0103 04:16:53.196130 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d16
14f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 
04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.216981 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.225420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.225482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.225502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.225530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.225550 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.236897 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z 
is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.254216 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.278335 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.318316 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"alancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611300 6273 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611286 6273 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0103 04:16:45.611476 6273 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.328146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.328211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.328237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.328265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.328287 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.335311 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.352199 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.368208 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.384455 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.402284 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.422492 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.431134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.431175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.431191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.431210 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.431226 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.443771 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.463509 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.482260 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:16:53Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.541966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.542023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.542045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.542078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.542100 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.645311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.645747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.645918 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.646061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.646192 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.749463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.749549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.749572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.749604 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.749627 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.852112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.852204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.852232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.852262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.852280 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.955174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.955242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.955265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.955295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:53 crc kubenswrapper[4865]: I0103 04:16:53.955316 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:53Z","lastTransitionTime":"2026-01-03T04:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.059146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.059204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.059222 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.059251 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.059269 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.155426 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.155573 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:54 crc kubenswrapper[4865]: E0103 04:16:54.155734 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:54 crc kubenswrapper[4865]: E0103 04:16:54.155852 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.155924 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:54 crc kubenswrapper[4865]: E0103 04:16:54.156026 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.162582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.162645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.162668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.162695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.162716 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.265930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.265999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.266023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.266050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.266072 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.368417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.368463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.368472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.368486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.368527 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.471630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.471666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.471677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.471695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.471706 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.574910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.574949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.574958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.574972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.574982 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.677890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.678012 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.678077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.678104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.678166 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.781551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.781613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.781632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.781659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.781677 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.885029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.885091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.885111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.885136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.885155 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.988762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.988824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.988841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.988865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:54 crc kubenswrapper[4865]: I0103 04:16:54.988883 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:54Z","lastTransitionTime":"2026-01-03T04:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.092340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.092428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.092446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.092476 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.092493 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.155700 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:55 crc kubenswrapper[4865]: E0103 04:16:55.155870 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.194728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.194782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.194798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.194816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.194827 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.297725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.297756 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.297764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.297779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.297787 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.399634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.399698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.399720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.399750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.399774 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.503869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.504202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.504326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.504505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.504681 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.608721 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.609047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.609214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.609338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.609548 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.666985 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:55 crc kubenswrapper[4865]: E0103 04:16:55.667315 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:55 crc kubenswrapper[4865]: E0103 04:16:55.667475 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:03.667439835 +0000 UTC m=+50.784493060 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.713465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.713714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.713839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.713922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.713998 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.817500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.818769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.818974 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.819176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.819586 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.923075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.923117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.923131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.923149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:55 crc kubenswrapper[4865]: I0103 04:16:55.923158 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:55Z","lastTransitionTime":"2026-01-03T04:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.025953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.026003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.026018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.026037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.026052 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.128934 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.128981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.128993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.129010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.129028 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.155129 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.155429 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.155185 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.155142 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.155604 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.155778 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.232432 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.232484 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.232501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.232520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.232533 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.336148 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.336228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.336250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.336281 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.336302 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.439498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.439889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.440037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.440214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.440346 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.544015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.544275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.544420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.544555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.544655 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.601827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.601892 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.601910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.601939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.601959 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.616208 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:56Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.620903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.620979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.621003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.621037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.621061 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.641056 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:56Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.645810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.645859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.645871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.645889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.645902 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.662613 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:56Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.668020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.668090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.668115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.668141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.668158 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.682087 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:56Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.686303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.686362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.686402 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.686430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.686447 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.699008 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:16:56Z is after 2025-08-24T17:21:41Z" Jan 03 04:16:56 crc kubenswrapper[4865]: E0103 04:16:56.699228 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.701458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.701510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.701528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.701552 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.701568 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.804940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.805038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.805062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.805097 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.805182 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.907525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.907591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.907608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.907633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:56 crc kubenswrapper[4865]: I0103 04:16:56.907648 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:56Z","lastTransitionTime":"2026-01-03T04:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.011424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.011479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.011492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.011511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.011526 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.114709 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.114767 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.114783 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.114805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.114818 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.155893 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:57 crc kubenswrapper[4865]: E0103 04:16:57.156138 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.218333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.218429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.218448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.218480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.218503 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.321499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.321559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.321574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.321594 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.321609 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.424527 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.424571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.424583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.424599 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.424610 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.527631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.527705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.527737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.527765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.527786 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.630572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.630633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.630656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.630686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.630706 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.733625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.733725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.733743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.733770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.733788 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.836420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.836478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.836495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.836517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.836537 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.940253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.940330 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.940353 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.940429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:57 crc kubenswrapper[4865]: I0103 04:16:57.940454 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:57Z","lastTransitionTime":"2026-01-03T04:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.043006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.043063 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.043075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.043092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.043112 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.146915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.146978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.146999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.147026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.147044 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.155308 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.155421 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.155459 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:16:58 crc kubenswrapper[4865]: E0103 04:16:58.155548 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:16:58 crc kubenswrapper[4865]: E0103 04:16:58.155673 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:16:58 crc kubenswrapper[4865]: E0103 04:16:58.155799 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.250011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.250077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.250100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.250131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.250153 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.353053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.353111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.353124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.353143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.353157 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.455917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.455979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.455997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.456022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.456042 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.559843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.559881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.559890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.559904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.559912 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.663256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.663323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.663342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.663369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.663421 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.766958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.767015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.767035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.767056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.767077 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.870503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.870578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.870598 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.870635 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.870655 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.972901 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.972940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.972949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.972963 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:58 crc kubenswrapper[4865]: I0103 04:16:58.972972 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:58Z","lastTransitionTime":"2026-01-03T04:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.076095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.076162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.076179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.076205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.076221 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.155768 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:16:59 crc kubenswrapper[4865]: E0103 04:16:59.155965 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.157904 4865 scope.go:117] "RemoveContainer" containerID="70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.178694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.178753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.178774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.178800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.178818 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.283644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.283711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.283730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.283758 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.283778 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.387734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.387798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.387812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.387838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.387854 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.490923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.490982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.491000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.491026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.491044 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.575103 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/1.log" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.579716 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.594496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.594556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.594581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.594612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.594637 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.699969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.700037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.700065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.700091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.700111 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.803727 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.803811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.803828 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.803854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.803872 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.905925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.905987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.906005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.906030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:16:59 crc kubenswrapper[4865]: I0103 04:16:59.906048 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:16:59Z","lastTransitionTime":"2026-01-03T04:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.009034 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.009126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.009160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.009192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.009214 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.111905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.111961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.111976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.111998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.112012 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.154768 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.154815 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.154768 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:00 crc kubenswrapper[4865]: E0103 04:17:00.155004 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:00 crc kubenswrapper[4865]: E0103 04:17:00.155080 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:00 crc kubenswrapper[4865]: E0103 04:17:00.155225 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.215333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.215373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.215413 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.215434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.215447 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.322272 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.322325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.322343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.322365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.322412 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.424616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.424660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.424672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.424688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.424700 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.526619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.526677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.526695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.526718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.526734 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.584845 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/2.log" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.585770 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/1.log" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.589149 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e" exitCode=1 Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.589238 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.589301 4865 scope.go:117] "RemoveContainer" containerID="70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.589794 4865 scope.go:117] "RemoveContainer" containerID="ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e" Jan 03 04:17:00 crc kubenswrapper[4865]: E0103 04:17:00.589968 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.607742 4865 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc 
kubenswrapper[4865]: I0103 04:17:00.629152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.629201 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.629218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.629242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.629260 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.632662 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.659047 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cc45d55e13550859a2f501ed014e74e7a808bda6b89ad77fbc2d27d44f49d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"message\\\":\\\"alancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611300 6273 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:16:45.611286 6273 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0103 04:16:45.611476 6273 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 
transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.681918 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.702543 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.723880 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.732867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.732947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.732967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.733323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.733551 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.739088 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.757839 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.777349 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.793956 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.810849 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.826528 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.837035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.837098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.837124 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.837154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.837176 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.846443 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.865156 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.885377 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.903479 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:00Z is after 2025-08-24T17:21:41Z"
Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.940061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.940104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.940119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.940143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 03 04:17:00 crc kubenswrapper[4865]: I0103 04:17:00.940160 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:00Z","lastTransitionTime":"2026-01-03T04:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.043266 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.043340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.043354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.043373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.043424 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.146068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.146135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.146154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.146179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.146196 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.155533 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7"
Jan 03 04:17:01 crc kubenswrapper[4865]: E0103 04:17:01.155723 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.249257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.249322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.249345 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.249374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.249438 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.352509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.352571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.352591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.352615 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.352634 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.455363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.455490 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.455508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.455534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.455552 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.514013 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.558663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.558727 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.558741 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.558765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.558781 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.600766 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/2.log"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.607618 4865 scope.go:117] "RemoveContainer" containerID="ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e"
Jan 03 04:17:01 crc kubenswrapper[4865]: E0103 04:17:01.607890 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d"
Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.626452 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.651547 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.661709 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.661782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.661796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.661820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.661836 4865 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.674250 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.693498 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.708730 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc 
kubenswrapper[4865]: I0103 04:17:01.724247 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.745785 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.765267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.765327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.765355 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.765421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.765445 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.778136 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.802464 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.825323 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.846357 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.865256 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metr
ics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.868505 4865 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.868567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.868587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.868614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.868631 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.887680 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.908690 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.925244 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.941774 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:01Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.971513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.971592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.971611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.971639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:01 crc kubenswrapper[4865]: I0103 04:17:01.971658 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:01Z","lastTransitionTime":"2026-01-03T04:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.074530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.074586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.074606 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.074630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.074648 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.154878 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.154955 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:02 crc kubenswrapper[4865]: E0103 04:17:02.155093 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:02 crc kubenswrapper[4865]: E0103 04:17:02.155240 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.155638 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:02 crc kubenswrapper[4865]: E0103 04:17:02.155762 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.178205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.178278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.178295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.178319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.178338 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.281455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.281530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.281550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.281580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.281599 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.385442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.385505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.385521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.385545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.385562 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.488702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.488754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.488770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.488814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.488833 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.592209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.592265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.592286 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.592311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.592329 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.610769 4865 scope.go:117] "RemoveContainer" containerID="ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e" Jan 03 04:17:02 crc kubenswrapper[4865]: E0103 04:17:02.611029 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.694613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.694676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.694693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.694718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.694733 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.798309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.798361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.798372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.798448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.798462 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.901543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.901602 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.901619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.901643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:02 crc kubenswrapper[4865]: I0103 04:17:02.901661 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:02Z","lastTransitionTime":"2026-01-03T04:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.004957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.005015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.005033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.005058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.005080 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.108000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.108061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.108086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.108116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.108139 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.156416 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:03 crc kubenswrapper[4865]: E0103 04:17:03.156715 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.176673 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc 
kubenswrapper[4865]: I0103 04:17:03.191190 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.211229 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.211281 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.211296 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.211317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.211331 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.214584 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.248496 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.271657 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.296624 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.316354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.316444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.316464 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.316492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.316511 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.321092 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z 
is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.341255 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.361708 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.381493 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.397262 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.413306 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.419486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.419556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.419574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.419600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.419620 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.433351 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.462537 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fd
ee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.502104 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.521495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.521543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.521556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.521576 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.521589 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.529313 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:03Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.624980 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.625039 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.625056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 
04:17:03.625080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.625096 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.728081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.728136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.728158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.728183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.728201 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.765095 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:03 crc kubenswrapper[4865]: E0103 04:17:03.765355 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:17:03 crc kubenswrapper[4865]: E0103 04:17:03.765499 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:19.765471018 +0000 UTC m=+66.882524243 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.831295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.831368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.831434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.831466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.831491 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.934344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.934434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.934458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.934486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:03 crc kubenswrapper[4865]: I0103 04:17:03.934507 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:03Z","lastTransitionTime":"2026-01-03T04:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.038171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.038243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.038267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.038312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.038336 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.068783 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.068925 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069027 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:17:36.068986818 +0000 UTC m=+83.186040043 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069057 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069131 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:36.069110312 +0000 UTC m=+83.186163527 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.069162 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.069223 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069371 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069371 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069429 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 
04:17:04.069447 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069484 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:36.069468821 +0000 UTC m=+83.186522046 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.069514 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:36.069499652 +0000 UTC m=+83.186552867 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.141646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.141712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.141735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.141760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.141776 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.155241 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.155278 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.155447 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.155519 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.155696 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.155885 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.170879 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.171081 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.171117 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.171144 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:04 crc kubenswrapper[4865]: E0103 04:17:04.171207 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:36.171184157 +0000 UTC m=+83.288237372 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.244480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.244543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.244560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.244585 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.244603 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.347768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.347835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.347855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.347880 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.347897 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.451664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.451744 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.451766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.451795 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.451820 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.555435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.555499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.555517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.555543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.555561 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.660446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.660522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.660545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.660569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.660587 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.763004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.763086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.763110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.763143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.763164 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.866829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.866893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.866914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.866940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.866960 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.970153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.970190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.970200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.970216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:04 crc kubenswrapper[4865]: I0103 04:17:04.970225 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:04Z","lastTransitionTime":"2026-01-03T04:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.072675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.072740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.072759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.072782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.072801 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.155435 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:05 crc kubenswrapper[4865]: E0103 04:17:05.155703 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.175185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.175236 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.175245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.175259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.175269 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.270898 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.277785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.277821 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.277834 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.277852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.277863 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.288245 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.292567 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.312187 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.326143 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.341699 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.359569 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.382637 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.382961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.382977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.382985 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.383002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.383013 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.399931 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.413829 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.428886 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.439332 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc 
kubenswrapper[4865]: I0103 04:17:05.454712 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.474880 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.485763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.485817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.485834 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.485858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.485877 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.488803 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.500662 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04
:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.513703 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.526894 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:05Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.592206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.592740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.592788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.592813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.592834 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.696013 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.696081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.696090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.696111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.696126 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.799807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.799883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.799905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.799936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.799954 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.903565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.903638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.903655 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.903724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:05 crc kubenswrapper[4865]: I0103 04:17:05.903771 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:05Z","lastTransitionTime":"2026-01-03T04:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.007524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.007600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.007617 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.007648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.007668 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.111931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.111999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.112017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.112048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.112069 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.155339 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.155455 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:06 crc kubenswrapper[4865]: E0103 04:17:06.155570 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:06 crc kubenswrapper[4865]: E0103 04:17:06.155749 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.156111 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:06 crc kubenswrapper[4865]: E0103 04:17:06.156452 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.215253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.215312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.215334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.215358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.215376 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.318260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.318337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.318360 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.318423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.318442 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.421697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.421798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.421823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.421859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.421888 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.540351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.540444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.540467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.540495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.540516 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.643247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.643343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.643363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.643418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.643437 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.746973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.747027 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.747044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.747070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.747088 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.850793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.850857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.850875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.850899 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.850918 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.954891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.954955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.954975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.955004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:06 crc kubenswrapper[4865]: I0103 04:17:06.955022 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:06Z","lastTransitionTime":"2026-01-03T04:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.058425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.058484 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.058501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.058530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.058550 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.092727 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.092779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.092797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.092821 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.092839 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: E0103 04:17:07.111438 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:07Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.116165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.116212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.116228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.116250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.116268 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: E0103 04:17:07.134757 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:07Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.138737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.138788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.138807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.138827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.138842 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.155597 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:07 crc kubenswrapper[4865]: E0103 04:17:07.155890 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:07 crc kubenswrapper[4865]: E0103 04:17:07.157092 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:07Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.162630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.162678 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.162706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.162735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.162759 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: E0103 04:17:07.181207 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:07Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.185907 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.185979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.185996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.186019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.186032 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: E0103 04:17:07.210853 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:07Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:07 crc kubenswrapper[4865]: E0103 04:17:07.211015 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.213423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.213477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.213497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.213526 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.213546 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.317128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.317190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.317208 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.317234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.317251 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.420342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.420407 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.420420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.420439 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.420452 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.523573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.523637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.523659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.523689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.523711 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.627131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.627197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.627214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.627242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.627260 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.730841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.730916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.730942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.730972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.730994 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.834140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.834222 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.834247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.834286 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.834310 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.937275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.937315 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.937325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.937339 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:07 crc kubenswrapper[4865]: I0103 04:17:07.937350 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:07Z","lastTransitionTime":"2026-01-03T04:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.039799 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.039852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.039865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.039885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.039901 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.142897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.142953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.142971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.143008 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.143026 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.154646 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.154733 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:08 crc kubenswrapper[4865]: E0103 04:17:08.154827 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.154945 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:08 crc kubenswrapper[4865]: E0103 04:17:08.154997 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:08 crc kubenswrapper[4865]: E0103 04:17:08.155038 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.245572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.245645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.245666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.245696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.245720 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.347704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.347740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.347751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.347769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.347781 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.451156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.451207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.451219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.451237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.451250 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.554467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.554524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.554541 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.554565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.554582 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.656733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.656813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.656838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.656917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.656974 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.760609 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.760744 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.760809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.760837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.760894 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.863829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.863880 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.863893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.863913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.863927 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.967830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.967880 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.967894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.967911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:08 crc kubenswrapper[4865]: I0103 04:17:08.967923 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:08Z","lastTransitionTime":"2026-01-03T04:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.070129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.070193 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.070214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.070241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.070259 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.155009 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:09 crc kubenswrapper[4865]: E0103 04:17:09.155262 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.173281 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.173332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.173344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.173361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.173375 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.275709 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.275755 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.275764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.275781 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.275794 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.378495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.378539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.378549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.378565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.378575 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.481736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.481816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.481840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.481873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.481898 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.585190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.585242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.585258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.585283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.585300 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.688633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.688691 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.688708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.688732 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.688751 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.792177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.792253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.792274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.792300 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.792318 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.895713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.895789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.895808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.895835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.895853 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.998837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.998882 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.998892 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.998911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:09 crc kubenswrapper[4865]: I0103 04:17:09.998922 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:09Z","lastTransitionTime":"2026-01-03T04:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.101620 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.101673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.101690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.101714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.101731 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.154798 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:10 crc kubenswrapper[4865]: E0103 04:17:10.154938 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.154804 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.154795 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:10 crc kubenswrapper[4865]: E0103 04:17:10.155026 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:10 crc kubenswrapper[4865]: E0103 04:17:10.155159 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.204420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.204496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.204516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.204541 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.204558 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.307200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.307253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.307269 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.307292 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.307309 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.411592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.411687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.411714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.411748 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.411770 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.514113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.514150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.514163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.514178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.514188 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.617466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.617527 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.617539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.617559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.617571 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.720211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.720287 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.720310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.720341 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.720363 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.822887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.822962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.822987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.823019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.823039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.926011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.926082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.926106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.926134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:10 crc kubenswrapper[4865]: I0103 04:17:10.926154 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:10Z","lastTransitionTime":"2026-01-03T04:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.029166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.029232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.029252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.029278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.029295 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.132802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.132853 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.132870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.132931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.132948 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.155176 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:11 crc kubenswrapper[4865]: E0103 04:17:11.155347 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.235963 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.236041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.236065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.236094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.236121 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.338556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.338616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.338637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.338660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.338677 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.441362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.441456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.441467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.441492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.441517 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.544819 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.544877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.544893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.544915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.544930 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.648200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.648288 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.648310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.648342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.648363 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.752247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.752425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.752453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.752480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.752496 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.855484 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.855568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.855581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.855608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.855623 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.960540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.960612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.960634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.960676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:11 crc kubenswrapper[4865]: I0103 04:17:11.960697 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:11Z","lastTransitionTime":"2026-01-03T04:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.064367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.064456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.064474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.064500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.064519 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.155371 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.155463 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.155450 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:12 crc kubenswrapper[4865]: E0103 04:17:12.155640 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:12 crc kubenswrapper[4865]: E0103 04:17:12.155839 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:12 crc kubenswrapper[4865]: E0103 04:17:12.155989 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.168054 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.168114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.168139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.168168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.168188 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.271125 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.271207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.271229 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.271262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.271284 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.374427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.374514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.374533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.374558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.374578 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.479153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.479210 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.479227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.479255 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.479276 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.582444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.582497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.582540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.582567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.582586 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.687658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.687731 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.687748 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.687780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.687802 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.790969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.791038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.791060 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.791092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.791114 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.895248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.895331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.895358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.895430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.895456 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.999249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.999311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.999328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.999353 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:12 crc kubenswrapper[4865]: I0103 04:17:12.999370 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:12Z","lastTransitionTime":"2026-01-03T04:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.102676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.103134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.103149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.103177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.103193 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.155151 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:13 crc kubenswrapper[4865]: E0103 04:17:13.155322 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.176860 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.196556 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.207192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.207272 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc 
kubenswrapper[4865]: I0103 04:17:13.207298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.207335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.207363 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.219946 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.239193 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.263015 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cfc8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.281825 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc 
kubenswrapper[4865]: I0103 04:17:13.310872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.310942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.310955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.310978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.310995 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.315348 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.340008 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.364439 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.385117 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.405250 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.414018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: 
I0103 04:17:13.414098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.414116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.414144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.414162 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.431892 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.455979 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.478013 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.495357 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.515000 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.517990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.518038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.518055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.518083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.518101 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.535471 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:13Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.620574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.620802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.620891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.620986 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.621082 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.724471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.724542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.724562 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.724591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.724611 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.827851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.827912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.827935 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.827965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.827990 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.931289 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.931327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.931338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.931354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:13 crc kubenswrapper[4865]: I0103 04:17:13.931364 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:13Z","lastTransitionTime":"2026-01-03T04:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.034095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.034203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.034221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.034245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.034263 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.137756 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.137824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.137843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.137868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.137886 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.155461 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:14 crc kubenswrapper[4865]: E0103 04:17:14.155629 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.155873 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:14 crc kubenswrapper[4865]: E0103 04:17:14.156065 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.156294 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:14 crc kubenswrapper[4865]: E0103 04:17:14.156705 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.241045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.241110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.241130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.241155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.241173 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.344310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.344370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.344416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.344442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.344459 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.447587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.447645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.447663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.447686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.447704 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.550423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.550476 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.550492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.550514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.550531 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.653442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.653498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.653517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.653541 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.653558 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.756834 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.757147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.757304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.757606 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.757833 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.860822 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.861142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.861318 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.861515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.861665 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.965250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.965283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.965294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.965313 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:14 crc kubenswrapper[4865]: I0103 04:17:14.965325 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:14Z","lastTransitionTime":"2026-01-03T04:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.067263 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.067309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.067322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.067343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.067355 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.157828 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:15 crc kubenswrapper[4865]: E0103 04:17:15.157965 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.170238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.170274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.170285 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.170301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.170312 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.272527 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.272577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.272591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.272611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.272626 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.375406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.375451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.375463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.375481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.375494 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.477660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.477692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.477701 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.477717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.477727 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.580203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.580279 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.580298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.580324 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.580341 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.682848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.682928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.682952 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.682983 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.683005 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.785776 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.785809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.785819 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.785834 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.785844 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.888739 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.888793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.888815 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.888842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.888864 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.990496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.990696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.990775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.990847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:15 crc kubenswrapper[4865]: I0103 04:17:15.990920 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:15Z","lastTransitionTime":"2026-01-03T04:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.094003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.094059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.094083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.094112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.094134 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.155432 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.155447 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:16 crc kubenswrapper[4865]: E0103 04:17:16.155628 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:16 crc kubenswrapper[4865]: E0103 04:17:16.155736 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.155917 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:16 crc kubenswrapper[4865]: E0103 04:17:16.156173 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.197110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.197488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.197641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.197786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.197918 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.301115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.301171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.301188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.301211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.301229 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.404453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.404732 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.404850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.404955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.405034 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.507894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.507936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.507945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.507962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.507971 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.611205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.611267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.611278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.611298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.611311 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.714271 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.714333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.714352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.714433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.714453 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.817060 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.817097 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.817105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.817123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.817132 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.920380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.920453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.920464 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.920482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:16 crc kubenswrapper[4865]: I0103 04:17:16.920496 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:16Z","lastTransitionTime":"2026-01-03T04:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.022887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.022922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.022935 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.022952 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.022962 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.125042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.125077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.125092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.125116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.125131 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.155191 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.155406 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.156295 4865 scope.go:117] "RemoveContainer" containerID="ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e" Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.156561 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.227934 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.227977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.227992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.228009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.228021 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.333523 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.333590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.333607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.333634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.333650 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.436119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.436167 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.436184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.436209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.436227 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.538732 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.538777 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.538793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.538817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.538835 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.609931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.609961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.609969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.609981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.609990 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.621788 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:17Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.625768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.625813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.625824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.625843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.625854 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.639936 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:17Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.643424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.643463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.643475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.643493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.643507 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.658592 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:17Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.662578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.662604 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.662615 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.662631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.662645 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.679957 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:17Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.683543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.683573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.683587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.683603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.683615 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.700462 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:17Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:17 crc kubenswrapper[4865]: E0103 04:17:17.700609 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.702418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.702443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.702453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.702468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.702479 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.805296 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.805353 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.805362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.805394 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.805405 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.907495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.907534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.907546 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.907563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:17 crc kubenswrapper[4865]: I0103 04:17:17.907576 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:17Z","lastTransitionTime":"2026-01-03T04:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.010207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.010269 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.010285 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.010320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.010334 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.113257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.113319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.113335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.113359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.113376 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.154692 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.154785 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.155047 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:18 crc kubenswrapper[4865]: E0103 04:17:18.155042 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:18 crc kubenswrapper[4865]: E0103 04:17:18.155188 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:18 crc kubenswrapper[4865]: E0103 04:17:18.155364 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.215231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.215312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.215328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.215357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.215370 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.317855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.317916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.317931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.317958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.317976 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.421284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.421342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.421360 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.421415 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.421434 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.523763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.523835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.523852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.523881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.523903 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.627280 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.627448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.627469 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.627493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.627514 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.729710 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.729775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.729792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.729818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.729836 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.832289 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.832350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.832362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.832376 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.832420 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.934556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.934626 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.934639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.934659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:18 crc kubenswrapper[4865]: I0103 04:17:18.934675 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:18Z","lastTransitionTime":"2026-01-03T04:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.038208 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.038338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.038429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.038460 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.038479 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.143091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.143126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.143135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.143151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.143160 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.155129 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:19 crc kubenswrapper[4865]: E0103 04:17:19.155441 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.245738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.245852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.245877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.245912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.245936 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.349543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.349627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.349649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.349680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.349706 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.452707 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.452793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.452813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.452841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.452881 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.555053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.555098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.555108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.555123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.555133 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.657110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.657161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.657172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.657185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.657195 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.759940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.760015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.760040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.760070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.760093 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.774996 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:19 crc kubenswrapper[4865]: E0103 04:17:19.775176 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:17:19 crc kubenswrapper[4865]: E0103 04:17:19.775269 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:17:51.775239617 +0000 UTC m=+98.892292842 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.863452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.863493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.863502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.863517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.863527 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.966172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.966294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.966313 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.966333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:19 crc kubenswrapper[4865]: I0103 04:17:19.966345 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:19Z","lastTransitionTime":"2026-01-03T04:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.069091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.069124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.069132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.069144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.069153 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.155014 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.155088 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:20 crc kubenswrapper[4865]: E0103 04:17:20.155170 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.155244 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:20 crc kubenswrapper[4865]: E0103 04:17:20.155364 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:20 crc kubenswrapper[4865]: E0103 04:17:20.155427 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.171596 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.171658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.171674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.171700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.171717 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.273599 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.273651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.273668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.273690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.273707 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.376333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.376372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.376402 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.376417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.376429 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.479110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.479247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.479272 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.479298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.479320 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.582227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.582284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.582300 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.582323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.582341 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.685228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.685263 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.685274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.685292 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.685302 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.787619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.788026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.788247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.788509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.788742 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.892338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.892433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.892460 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.892497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.892520 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.995638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.995698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.995714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.995769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:20 crc kubenswrapper[4865]: I0103 04:17:20.995787 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:20Z","lastTransitionTime":"2026-01-03T04:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.098587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.098648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.098662 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.098679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.098692 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.155297 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:21 crc kubenswrapper[4865]: E0103 04:17:21.155788 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.168033 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.201231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.201265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.201274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.201290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.201301 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.303628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.303669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.303680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.303694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.303704 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.406284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.406322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.406334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.406349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.406360 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.508246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.508273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.508283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.508294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.508301 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.610999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.611045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.611059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.611079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.611095 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.676969 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/0.log" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.677054 4865 generic.go:334] "Generic (PLEG): container finished" podID="2fadcfb6-a571-4d6b-af2d-da885a478206" containerID="45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192" exitCode=1 Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.677205 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerDied","Data":"45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.678019 4865 scope.go:117] "RemoveContainer" containerID="45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.691033 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cfc8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.707671 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.713174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.713198 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.713206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.713221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.713229 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.725211 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.738271 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.753175 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.770699 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.789134 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.803187 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.815470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.815557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.815624 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.815705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.815764 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.821197 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.837330 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.856156 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.868627 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.886602 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.906367 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.922668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.922722 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.922733 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.922749 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.922761 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:21Z","lastTransitionTime":"2026-01-03T04:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.926989 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.945957 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.961731 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:21 crc kubenswrapper[4865]: I0103 04:17:21.974551 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:21Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.029208 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.029268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.029283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.029303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.029324 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.132242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.132364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.132458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.132529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.132596 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.155537 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:22 crc kubenswrapper[4865]: E0103 04:17:22.155654 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.155560 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:22 crc kubenswrapper[4865]: E0103 04:17:22.155727 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.155550 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:22 crc kubenswrapper[4865]: E0103 04:17:22.155774 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.235190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.235225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.235233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.235245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.235253 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.338259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.338319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.338336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.338362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.338414 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.441598 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.442095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.442265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.442449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.442588 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.545449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.545827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.546010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.546160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.546299 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.649571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.649809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.649891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.650021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.650116 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.684196 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/0.log" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.684258 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerStarted","Data":"141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.702741 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cfc8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.717478 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc 
kubenswrapper[4865]: I0103 04:17:22.732457 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.750503 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.753149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.753226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.753253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.753281 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.753299 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.767292 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.784119 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e
d6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/
kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.798189 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.816962 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.848940 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.856347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.856396 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.856406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.856421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.856431 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.864446 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.880478 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.893765 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.907757 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.921675 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.942567 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.957953 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.959885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.959944 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.959984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 
04:17:22.960008 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.960024 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:22Z","lastTransitionTime":"2026-01-03T04:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.975974 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:22 crc kubenswrapper[4865]: I0103 04:17:22.996811 4865 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:22Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.062768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.062842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.062860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.062886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.062906 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.154863 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:23 crc kubenswrapper[4865]: E0103 04:17:23.155104 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.165332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.165411 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.165429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.165451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.165467 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.173039 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cf
c8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.185340 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc 
kubenswrapper[4865]: I0103 04:17:23.203438 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.224235 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.239157 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.253203 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.267925 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.273711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.273773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.273783 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc 
kubenswrapper[4865]: I0103 04:17:23.273860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.273919 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.298474 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51
932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.320515 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.340353 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.361596 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.376892 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.380240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.380293 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.380328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.380356 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.380377 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.395588 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.413643 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe4
2eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.431286 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.450825 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.468956 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.483140 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:23Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.484157 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.484204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.484222 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.484248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.484270 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.586717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.586772 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.586789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.586812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.586829 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.689113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.689178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.689197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.689223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.689241 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.792071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.792128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.792141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.792161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.792175 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.894643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.894694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.894708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.894729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.894741 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.997473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.997537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.997553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.997577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:23 crc kubenswrapper[4865]: I0103 04:17:23.997596 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:23Z","lastTransitionTime":"2026-01-03T04:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.100598 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.100639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.100648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.100664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.100675 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.155211 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.155296 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.155234 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:24 crc kubenswrapper[4865]: E0103 04:17:24.155367 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:24 crc kubenswrapper[4865]: E0103 04:17:24.155577 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:24 crc kubenswrapper[4865]: E0103 04:17:24.155768 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.206132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.206183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.206194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.206213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.206226 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.308136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.308182 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.308194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.308212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.308226 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.410810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.410856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.410867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.410884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.410898 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.512516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.512543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.512551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.512563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.512571 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.615060 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.615118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.615136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.615162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.615180 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.717058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.717277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.717410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.717515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.717614 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.819555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.819619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.819637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.819664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.819745 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.922199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.922255 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.922273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.922298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:24 crc kubenswrapper[4865]: I0103 04:17:24.922316 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:24Z","lastTransitionTime":"2026-01-03T04:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.023954 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.024238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.024319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.024421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.024526 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.126325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.126399 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.126416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.126437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.126448 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.154935 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:25 crc kubenswrapper[4865]: E0103 04:17:25.155077 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.228237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.228298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.228310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.228331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.228342 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.330424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.330465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.330475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.330494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.330505 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.432616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.432896 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.432992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.433083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.433168 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.535928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.536164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.536250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.536369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.536482 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.639754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.639814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.639827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.639847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.639860 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.742818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.742912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.742934 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.742967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.742994 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.846297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.846343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.846357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.846376 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.846407 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.949580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.949640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.949656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.949681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:25 crc kubenswrapper[4865]: I0103 04:17:25.949699 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:25Z","lastTransitionTime":"2026-01-03T04:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.052460 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.052506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.052517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.052537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.052548 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.154703 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.154707 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:26 crc kubenswrapper[4865]: E0103 04:17:26.154855 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.154714 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.154997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.155023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.155033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: E0103 04:17:26.154993 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.155048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: E0103 04:17:26.155052 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.155140 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.257875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.257926 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.257940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.257961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.257976 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.360656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.360725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.360750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.360780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.360801 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.463940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.463977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.463987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.464001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.464010 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.566645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.566688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.566698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.566713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.566722 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.669005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.669069 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.669093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.669142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.669165 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.772107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.772164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.772191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.772220 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.772242 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.874538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.874599 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.874617 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.874640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.874658 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.976870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.976906 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.976916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.976930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:26 crc kubenswrapper[4865]: I0103 04:17:26.976939 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:26Z","lastTransitionTime":"2026-01-03T04:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.079187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.079250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.079288 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.079316 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.079337 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.155112 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:27 crc kubenswrapper[4865]: E0103 04:17:27.155295 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.181237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.181301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.181319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.181347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.181365 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.284937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.284984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.284999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.285022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.285039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.388290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.388347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.388363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.388412 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.388431 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.491375 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.491453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.491473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.491497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.491515 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.595062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.595129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.595145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.595170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.595188 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.700144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.700194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.700219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.700244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.700265 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.804555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.804618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.804637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.804664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.804687 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.829491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.829547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.829563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.829587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.829604 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: E0103 04:17:27.849958 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:27Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.855147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.855191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.855212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.855242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.855265 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: E0103 04:17:27.877437 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:27Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.884955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.885040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.885065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.885090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.885141 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: E0103 04:17:27.901546 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:27Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.906333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.906375 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.906411 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.906447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.906457 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: E0103 04:17:27.922953 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:27Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.928472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.928559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.928613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.928639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.928658 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:27 crc kubenswrapper[4865]: E0103 04:17:27.944262 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:27Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:27 crc kubenswrapper[4865]: E0103 04:17:27.944416 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.946771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.946803 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.946814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.946829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:27 crc kubenswrapper[4865]: I0103 04:17:27.946838 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:27Z","lastTransitionTime":"2026-01-03T04:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.049374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.049478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.049497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.049524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.049543 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.152169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.152218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.152230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.152248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.152259 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.155719 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.155738 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.155762 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:28 crc kubenswrapper[4865]: E0103 04:17:28.155934 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:28 crc kubenswrapper[4865]: E0103 04:17:28.156078 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:28 crc kubenswrapper[4865]: E0103 04:17:28.156184 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.157627 4865 scope.go:117] "RemoveContainer" containerID="ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.256124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.256177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.256194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.256221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.256239 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.358927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.358967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.358987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.359013 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.359035 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.462515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.462555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.462573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.462597 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.462614 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.565111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.565173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.565191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.565215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.565231 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.668493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.668583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.668660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.668692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.668715 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.706488 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/2.log" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.710155 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.710685 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.744361 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.760478 4865 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.770548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.770588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.770601 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.770619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.770632 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.775430 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.788181 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.801229 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cfc8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.813558 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc 
kubenswrapper[4865]: I0103 04:17:28.827785 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.843043 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.863827 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.873674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.873724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.873737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.873755 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.873767 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.882462 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.896997 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.911472 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.934440 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.949326 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.962923 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.973410 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.975851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.975883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.975894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.975908 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.975919 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:28Z","lastTransitionTime":"2026-01-03T04:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.987212 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:28 crc kubenswrapper[4865]: I0103 04:17:28.999150 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe4
2eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:28Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.078870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.078936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.078957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.078986 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.079006 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.155804 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:29 crc kubenswrapper[4865]: E0103 04:17:29.155956 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.181555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.181621 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.181641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.181665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.181685 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.283608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.283657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.283675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.283699 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.284346 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.387448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.387493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.387502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.387517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.387527 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.490063 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.490104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.490114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.490131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.490141 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.593103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.593155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.593173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.593196 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.593214 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.696610 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.696675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.696698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.696728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.696750 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.717188 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/3.log" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.718333 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/2.log" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.722826 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" exitCode=1 Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.722886 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.722945 4865 scope.go:117] "RemoveContainer" containerID="ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.724184 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:17:29 crc kubenswrapper[4865]: E0103 04:17:29.724572 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.742539 4865 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.770652 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.800712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.800768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.800784 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.800810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.800855 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.803564 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee425d77ced577d80e9006c8bb1a75efa3fc27a2a7c1173c63fa67b50435993e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:00Z\\\",\\\"message\\\":\\\":]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339696 6499 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:00.339782 6499 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.204:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {78f6184b-c7cf-436d-8cbb-4b31f8af75e8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:00.339850 6499 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}\\\\nF0103 04:17:00.339861 6499 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:29Z\\\",\\\"message\\\":\\\"s)\\\\nI0103 04:17:29.143130 6893 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:17:29.143170 6893 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:29.143125 6893 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:29.143161 6893 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nF0103 04:17:29.143294 6893 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:17:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35c
a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.821334 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.843181 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.865353 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.882959 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.900222 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.904188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.904242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.904261 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.904284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.904303 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:29Z","lastTransitionTime":"2026-01-03T04:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.921486 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.945594 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.962156 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.977082 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:29 crc kubenswrapper[4865]: I0103 04:17:29.994798 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:29Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.007323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.007401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.007420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.007441 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.007457 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.015730 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d
972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.036324 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.055571 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.073249 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cfc8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.090525 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.110778 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.110846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.110871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.110900 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.110924 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.155595 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.155642 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.155613 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:30 crc kubenswrapper[4865]: E0103 04:17:30.155807 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:30 crc kubenswrapper[4865]: E0103 04:17:30.155926 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:30 crc kubenswrapper[4865]: E0103 04:17:30.156061 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.213048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.213108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.213131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.213164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.213189 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.316456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.316527 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.316544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.316567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.316585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.420488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.420557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.420577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.420603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.420622 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.523419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.523477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.523493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.523516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.523535 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.626016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.626075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.626095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.626119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.626135 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.728475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.728535 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.728553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.728577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.728594 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.730477 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/3.log" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.735695 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:17:30 crc kubenswrapper[4865]: E0103 04:17:30.735955 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.757590 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.778085 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.800223 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.818165 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.831911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.831980 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.831997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.832024 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.832046 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.839992 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cf
c8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.855469 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc 
kubenswrapper[4865]: I0103 04:17:30.872812 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.892276 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.914540 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.936820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.936887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.936914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.936946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.936969 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:30Z","lastTransitionTime":"2026-01-03T04:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.937406 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.954916 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:30 crc kubenswrapper[4865]: I0103 04:17:30.973187 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:30Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.005757 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:29Z\\\",\\\"message\\\":\\\"s)\\\\nI0103 04:17:29.143130 6893 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:17:29.143170 6893 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:29.143125 6893 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:29.143161 6893 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nF0103 04:17:29.143294 6893 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:17:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:31Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.027230 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:31Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.040804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.040883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.040909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc 
kubenswrapper[4865]: I0103 04:17:31.040942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.040963 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.049474 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:31Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.065605 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:31Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.081491 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:31Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.099117 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:31Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.144243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.144318 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.144336 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.144363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.144409 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.155742 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:31 crc kubenswrapper[4865]: E0103 04:17:31.156140 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.247540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.247839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.247994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.248162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.248286 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.351824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.351894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.351917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.351947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.351972 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.455292 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.455339 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.455351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.455369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.455443 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.558723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.559175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.559442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.559625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.559790 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.679840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.679905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.679922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.679946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.679964 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.782614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.782664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.782680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.782699 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.782713 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.886523 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.886871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.887049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.887191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.887332 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.990157 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.991079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.991235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.991410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:31 crc kubenswrapper[4865]: I0103 04:17:31.991547 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:31Z","lastTransitionTime":"2026-01-03T04:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.094356 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.094675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.095003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.095165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.095313 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.155340 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.155556 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:32 crc kubenswrapper[4865]: E0103 04:17:32.155745 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.155881 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:32 crc kubenswrapper[4865]: E0103 04:17:32.155973 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:32 crc kubenswrapper[4865]: E0103 04:17:32.156196 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.198966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.199443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.199648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.199816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.199979 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.304298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.304409 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.304433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.304467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.304488 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.408616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.408693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.408712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.408742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.408762 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.511342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.511404 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.511418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.511436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.511448 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.613861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.614252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.614448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.614609 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.614895 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.717888 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.717955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.717972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.717996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.718014 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.820550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.820942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.821102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.821246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.821376 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.924130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.924189 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.924203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.924225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:32 crc kubenswrapper[4865]: I0103 04:17:32.924240 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:32Z","lastTransitionTime":"2026-01-03T04:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.026865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.027370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.027424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.027455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.027477 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.131084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.131455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.131614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.131764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.131896 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.154763 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:33 crc kubenswrapper[4865]: E0103 04:17:33.155126 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.176547 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.197057 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.213076 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.229747 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.235075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.235126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.235143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.235168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.235184 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.246625 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.267225 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.285735 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.306935 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.325760 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.337419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.337499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.337520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.338085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.338159 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.341707 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cf
c8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.358628 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc 
kubenswrapper[4865]: I0103 04:17:33.374198 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.391663 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66f
e0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.407688 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.425202 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.445305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.445416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.445434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.445455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.445468 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.445890 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.468002 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.500820 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:29Z\\\",\\\"message\\\":\\\"s)\\\\nI0103 04:17:29.143130 6893 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:17:29.143170 6893 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:29.143125 6893 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:29.143161 6893 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nF0103 04:17:29.143294 6893 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:17:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:33Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.549697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.549763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.549782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.549807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.549825 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.653291 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.653342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.653362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.653423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.653450 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.756426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.756477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.756506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.756521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.756532 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.859918 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.860003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.860026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.860057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.860079 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.972089 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.972155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.972178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.972206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:33 crc kubenswrapper[4865]: I0103 04:17:33.972229 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:33Z","lastTransitionTime":"2026-01-03T04:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.077335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.077440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.077455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.077482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.077497 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.155130 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.155204 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:34 crc kubenswrapper[4865]: E0103 04:17:34.155433 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.155541 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:34 crc kubenswrapper[4865]: E0103 04:17:34.155671 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:34 crc kubenswrapper[4865]: E0103 04:17:34.155763 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.181019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.181090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.181112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.181143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.181166 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.283898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.283964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.283985 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.284010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.284032 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.388551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.388627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.388650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.388679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.388703 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.492199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.492287 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.492305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.492333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.492353 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.594778 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.594846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.594863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.594890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.594910 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.698166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.698281 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.698299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.698328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.698348 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.802078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.802162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.802186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.802218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.802240 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.905251 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.905311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.905337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.905366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:34 crc kubenswrapper[4865]: I0103 04:17:34.905434 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:34Z","lastTransitionTime":"2026-01-03T04:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.009244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.009362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.009420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.009459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.009482 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.111996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.112062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.112081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.112108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.112155 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.155688 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:35 crc kubenswrapper[4865]: E0103 04:17:35.155887 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.216503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.216568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.216587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.216613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.216629 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.319607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.319652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.319665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.319681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.319693 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.422567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.422645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.422672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.422701 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.422725 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.525213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.525271 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.525288 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.525311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.525328 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.628190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.628223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.628232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.628245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.628254 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.731758 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.731822 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.731846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.731876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.731898 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.834660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.834728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.834742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.834761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.834774 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.938186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.938267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.938291 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.938321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:35 crc kubenswrapper[4865]: I0103 04:17:35.938343 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:35Z","lastTransitionTime":"2026-01-03T04:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.041453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.041502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.041513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.041531 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.041542 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.144079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.144140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.144158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.144180 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.144197 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.155631 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.155639 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.155839 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.155989 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.156232 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.156299 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.162336 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.162494 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162558 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.162525689 +0000 UTC m=+147.279578904 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162635 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162657 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162674 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.162670 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162726 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.162710314 +0000 UTC m=+147.279763529 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.162756 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162824 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162844 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162896 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.162883709 +0000 UTC m=+147.279936934 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.162919 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.1629079 +0000 UTC m=+147.279961115 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.247052 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.247107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.247123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.247147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.247168 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.263677 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.263827 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.263857 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.263875 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:36 crc kubenswrapper[4865]: E0103 04:17:36.263936 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-03 04:18:40.263914729 +0000 UTC m=+147.380967944 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.350942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.351010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.351027 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.351053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.351070 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.454119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.454186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.454203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.454238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.454256 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.557140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.557180 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.557188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.557200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.557208 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.659618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.659669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.659686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.659725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.659737 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.762767 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.762837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.762855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.762877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.762895 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.866440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.866483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.866494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.866511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.866524 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.969610 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.969670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.969689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.969713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:36 crc kubenswrapper[4865]: I0103 04:17:36.969733 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:36Z","lastTransitionTime":"2026-01-03T04:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.073076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.073149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.073170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.073201 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.073224 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.155677 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:37 crc kubenswrapper[4865]: E0103 04:17:37.155875 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.176135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.176213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.176241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.176272 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.176294 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.279478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.279542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.279559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.279586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.279604 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.382737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.382804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.382823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.382849 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.382868 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.486163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.486244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.486261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.486284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.486301 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.589861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.589917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.589941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.589970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.589991 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.692861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.692941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.692964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.692992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.693013 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.796420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.796483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.796500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.796525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.796544 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.899429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.899479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.899491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.899513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.899527 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.985991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.986053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.986067 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.986092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:37 crc kubenswrapper[4865]: I0103 04:17:37.986108 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:37Z","lastTransitionTime":"2026-01-03T04:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: E0103 04:17:38.004270 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.009134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.009187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.009199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.009223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.009240 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: E0103 04:17:38.028671 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.033490 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.033550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.033569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.033595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.033614 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.089108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.089143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.089153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.089168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.089179 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: E0103 04:17:38.102398 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e7152c83-ea61-42b3-a0b7-284415671ac6\\\",\\\"systemUUID\\\":\\\"2b92fba2-4500-48df-a5c0-af75c72ccb04\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:38Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:38 crc kubenswrapper[4865]: E0103 04:17:38.102520 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.103885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.103898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.103905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.103917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.103927 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.154999 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.155019 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.155069 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:38 crc kubenswrapper[4865]: E0103 04:17:38.155115 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:38 crc kubenswrapper[4865]: E0103 04:17:38.155185 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:38 crc kubenswrapper[4865]: E0103 04:17:38.155327 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.206218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.206267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.206280 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.206300 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.206312 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.308681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.308737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.308749 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.308765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.308779 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.411266 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.411323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.411341 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.411368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.411425 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.513610 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.513960 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.513973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.513993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.514005 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.615948 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.615992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.616003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.616018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.616030 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.719759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.719823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.719836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.719859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.719874 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.823734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.823978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.824003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.824040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.824064 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.927161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.927277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.927295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.927322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:38 crc kubenswrapper[4865]: I0103 04:17:38.927369 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:38Z","lastTransitionTime":"2026-01-03T04:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.031118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.031203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.031224 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.031261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.031285 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.134841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.134931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.134949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.134979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.134998 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.154732 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:39 crc kubenswrapper[4865]: E0103 04:17:39.154949 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.237594 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.237667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.237687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.237716 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.237732 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.340854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.340909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.340926 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.340949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.340966 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.443351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.443528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.443557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.443593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.443617 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.546232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.546290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.546307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.546332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.546350 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.648431 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.648491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.648516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.648549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.648568 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.751725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.751782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.751801 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.751824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.751841 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.854036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.854071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.854083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.854099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.854111 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.956895 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.956965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.956981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.957008 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:39 crc kubenswrapper[4865]: I0103 04:17:39.957026 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:39Z","lastTransitionTime":"2026-01-03T04:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.060000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.060059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.060076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.060098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.060113 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.155693 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.155799 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.155740 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:40 crc kubenswrapper[4865]: E0103 04:17:40.155903 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:40 crc kubenswrapper[4865]: E0103 04:17:40.156054 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:40 crc kubenswrapper[4865]: E0103 04:17:40.156181 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.162946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.163004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.163028 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.163054 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.163071 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.267276 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.267358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.267439 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.267471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.267494 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.370301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.370372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.370429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.370462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.370489 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.473651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.473707 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.473722 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.473744 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.473761 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.576628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.576707 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.576728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.576754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.576773 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.678914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.678948 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.678958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.678971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.678979 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.785830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.785911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.785941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.785967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.785985 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.889184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.889248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.889266 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.889291 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.889314 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.992243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.992302 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.992323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.992351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:40 crc kubenswrapper[4865]: I0103 04:17:40.992374 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:40Z","lastTransitionTime":"2026-01-03T04:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.095493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.095572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.095589 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.095613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.095632 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.155111 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:41 crc kubenswrapper[4865]: E0103 04:17:41.155362 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.198139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.198176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.198187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.198203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.198215 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.301064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.301146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.301169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.301197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.301218 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.404593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.404672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.404693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.404723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.404745 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.507372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.507461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.507477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.507499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.507517 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.610044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.610116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.610139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.610169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.610189 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.712559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.712603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.712624 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.712648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.712670 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.815127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.815183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.815204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.815231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.815251 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.918734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.918792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.918810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.918834 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:41 crc kubenswrapper[4865]: I0103 04:17:41.918850 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:41Z","lastTransitionTime":"2026-01-03T04:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.022295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.022354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.022371 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.022427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.022446 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.125683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.125735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.125754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.125775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.125792 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.155238 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.155258 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.155334 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:42 crc kubenswrapper[4865]: E0103 04:17:42.155560 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:42 crc kubenswrapper[4865]: E0103 04:17:42.155654 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:42 crc kubenswrapper[4865]: E0103 04:17:42.155776 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.228303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.228347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.228364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.228418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.228444 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.331602 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.331661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.331682 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.331707 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.331727 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.434784 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.434847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.434865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.434890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.434907 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.537563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.537631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.537649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.537669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.537681 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.640522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.640593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.640616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.640646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.640724 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.743218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.743263 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.743275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.743291 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.743301 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.846999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.847044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.847055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.847072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.847083 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.949988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.950053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.950070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.950093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:42 crc kubenswrapper[4865]: I0103 04:17:42.950111 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:42Z","lastTransitionTime":"2026-01-03T04:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.052538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.052577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.052586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.052601 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.052613 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.154814 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:43 crc kubenswrapper[4865]: E0103 04:17:43.155101 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.156078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.156450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.157183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.157236 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.157262 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.177997 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00909cf3-8b95-4996-a96e-fad2f10c75bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d307672e191ff47b677ad3b3948e9ae80857c3c13ccdcc40f299f8beb7d69fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a47d2e954b06fc2d77a560cee467cf
c8466811e10ac66e8f7eefba6dcbbeaa2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5806efe2ffaa27e0d0eaa638a2376763b0063b74a67f97bc5accce36963dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9f58d9035c19edfc1108c655c737c09c698c9b64a2c9313e31a28f6a26b64c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.195285 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20f5ddd2-fabb-45db-83ad-9c45135ec710\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7x7xt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wb9c7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc 
kubenswrapper[4865]: I0103 04:17:43.231593 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"226b5379-0cbe-42e6-b5af-917a5e4b734d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:29Z\\\",\\\"message\\\":\\\"s)\\\\nI0103 04:17:29.143130 6893 ovnkube.go:599] Stopped ovnkube\\\\nI0103 04:17:29.143170 6893 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 04:17:29.143125 6893 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 04:17:29.143161 6893 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nF0103 04:17:29.143294 6893 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:17:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de6f2b4cc8e3400fd6
8be92dae9978ae1161313e154548b16caaadab9fbb35ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bhlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jvxfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.247204 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"978a6099-a4ad-45ab-83e8-8ad2593d2c23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e6e128445c51cbf5abbec1f1b791f8b40c9f01db0cd62f8eacdcd04df757e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bebca9bb0ac456075e9bdcf18641f43427d6f3020c5e810242010f0fe72ad04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.261870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.261916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.261926 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.261941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.261952 4865 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.268400 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec06e4a8-7a39-4921-8852-0fcb3035f15e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 04:16:31.711259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 04:16:31.711452 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 04:16:31.712340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2435461326/tls.crt::/tmp/serving-cert-2435461326/tls.key\\\\\\\"\\\\nI0103 04:16:32.124031 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 04:16:32.130472 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 04:16:32.130584 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 04:16:32.130647 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 04:16:32.130688 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 04:16:32.138076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 04:16:32.138124 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138134 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 04:16:32.138146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 04:16:32.138155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 04:16:32.138162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 04:16:32.138170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 04:16:32.138213 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 04:16:32.139734 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.285789 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139bc1753c7ff8dd551a3305b446f71c78bb564ca7ec06643a5ca6dd9137635a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.303906 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nrhl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fadcfb6-a571-4d6b-af2d-da885a478206\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T04:17:20Z\\\",\\\"message\\\":\\\"2026-01-03T04:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0\\\\n2026-01-03T04:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_097cb64d-90c3-475e-8ee8-03529ca068a0 to /host/opt/cni/bin/\\\\n2026-01-03T04:16:35Z [verbose] multus-daemon started\\\\n2026-01-03T04:16:35Z [verbose] Readiness Indicator file check\\\\n2026-01-03T04:17:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjgw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nrhl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.322275 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"122690aa-cb57-4839-8349-30c5221c5b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://708cf9ff2a180c1d615f9247978565dbfa699f51163271af324c0efc2d547d90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a
9dd30e99e1f914cbc9c9b9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnl4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mh2rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.341947 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6570eea8-b60f-43b1-830a-0f6293f571b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9957e8427afc17cc52b40d23dc7cf73c765f208c832d32de1cc9b14330d2618c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dd35979e4e92e0a6515aebbde00193462d63c13f1256c06579f9336bf2678d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f85f39c13e9ea1a89264a1f16f584c8b9e12568ed7b8028f5e7a65774260ac38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e155bb90ed7ca1d55bc5e93d23f43ec5c51438658a59a814c2e33c0a57d752f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3696
aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3696aa79e92707df61b802040a39dceb7c4131cf69638ff88b05ef4ea927134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769707be839839fdbb03a59916f8192d4d62efb61751d70a3266163ba9b4bc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51932e3213921db8460ffff73d4c7a20a672a142d1ee384cf0b1bc8e2e91d936\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T04:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T04:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45dhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nz8q7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.361929 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.365673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.365742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.365761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.365784 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.365801 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.382824 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.398882 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hfsrf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e3abbf-0e02-4745-b0d2-c6995c1e1b95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e87e0450e939581c6d7d72d08ae5a3ed3f0d77f5ac189d5e4e36db8a1786794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9k6rj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hfsrf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.413607 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4m4gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad15b416-c3c0-45d7-a36d-974f798313fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18f83f877b176ac9db6f764d7f32452d33321e8dec5939e61096b8ee6a41d83a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4m4gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.430678 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4fe3f7a-8ad0-4271-a182-96a880ab89c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0c4aefe42eba57f11c744b9a4b4646b2004e1a9fc4a7e6a166543fe06619f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e277f77a5f6e50466519a78c90b8b2594f13
c2d8ec55c1129d4d2dfcf5e416f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqwtw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-phsbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.451072 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ab804b5-f80b-455c-b0b8-8053ece85fd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93daccfa73bd91b876048f7622950548ff13e1dad8574ec1b7a834770ca8823a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4401bb53d87941f568d11b1e059c9878671acd182c9bd019734a1251aba2147c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca44142837452cfbaa689d893f274d9b89378b7ef30c80bdddfaf5ece008bc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-03T04:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T04:16:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.469155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.469262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.469284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.469310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.469329 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.473416 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.495000 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4317901d24e402ecd1efe907596fd2e87b76d4106b1f6d26d6d3013063bc640d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://153aec19b7faf76971db8f91ecef258a2f672edb4dd3611e95a9ae92040cde75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.511845 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T04:16:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c53c5c10795b5a8a06acfddd5debc444993a92de56550a4c8557696fec2af4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T04:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-03T04:17:43Z is after 2025-08-24T17:21:41Z" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.571750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.571818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.571841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.571871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.571891 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.674522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.674577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.674594 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.674618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.674636 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.778201 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.778295 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.778314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.778793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.778920 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.882617 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.882719 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.882739 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.882769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.882789 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.985481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.985563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.985586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.985622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:43 crc kubenswrapper[4865]: I0103 04:17:43.985647 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:43Z","lastTransitionTime":"2026-01-03T04:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.089181 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.089239 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.089256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.089280 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.089298 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.154868 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.155013 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:44 crc kubenswrapper[4865]: E0103 04:17:44.155245 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.155346 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:44 crc kubenswrapper[4865]: E0103 04:17:44.155484 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:44 crc kubenswrapper[4865]: E0103 04:17:44.155632 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.156902 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:17:44 crc kubenswrapper[4865]: E0103 04:17:44.157258 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.192630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.192718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.192739 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.192766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.192786 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.296977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.297056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.297080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.297108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.297140 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.400842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.400910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.400928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.400958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.400976 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.504564 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.504630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.504650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.504681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.504700 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.608537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.608614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.608639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.608672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.608696 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.712838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.712909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.712926 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.712959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.712977 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.815512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.815580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.815603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.815630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.815650 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.920216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.920283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.920304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.920332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:44 crc kubenswrapper[4865]: I0103 04:17:44.920350 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:44Z","lastTransitionTime":"2026-01-03T04:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.024874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.024950 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.024967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.025001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.025027 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.128967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.129053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.129073 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.129105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.129126 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.155598 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:45 crc kubenswrapper[4865]: E0103 04:17:45.155913 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.232911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.232986 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.233006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.233039 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.233062 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.336774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.336838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.336852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.336878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.336893 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.439558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.439623 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.439639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.439664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.439682 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.542527 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.542608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.542630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.542663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.542686 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.646125 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.646191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.646211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.646238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.646254 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.749365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.749470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.749489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.749518 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.749537 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.852341 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.852446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.852470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.852533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.852559 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.955693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.955740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.955752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.955769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:45 crc kubenswrapper[4865]: I0103 04:17:45.955782 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:45Z","lastTransitionTime":"2026-01-03T04:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.058979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.059034 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.059050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.059072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.059091 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.155062 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.155142 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.155096 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:46 crc kubenswrapper[4865]: E0103 04:17:46.155317 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:46 crc kubenswrapper[4865]: E0103 04:17:46.155495 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:46 crc kubenswrapper[4865]: E0103 04:17:46.155623 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.162505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.162800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.162875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.162909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.162932 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.265423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.265503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.265703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.265739 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.266933 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.369846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.369891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.369901 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.369917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.369928 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.472307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.472426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.472452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.472475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.472493 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.575466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.575543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.575565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.575599 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.575621 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.678673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.678732 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.678750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.678772 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.678790 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.781468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.781505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.781516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.781533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.781543 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.884424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.884479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.884496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.884520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.884539 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.988186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.988243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.988260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.988283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:46 crc kubenswrapper[4865]: I0103 04:17:46.988301 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:46Z","lastTransitionTime":"2026-01-03T04:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.091043 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.091103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.091119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.091142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.091158 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.155716 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:47 crc kubenswrapper[4865]: E0103 04:17:47.156112 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.193996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.194072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.194095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.194127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.194154 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.297064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.297127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.297144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.297168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.297185 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.400798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.400871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.400889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.400916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.400933 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.504190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.504264 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.504282 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.504307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.504325 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.607236 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.607297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.607315 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.607426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.607446 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.710112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.710156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.710168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.710185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.710196 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.813183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.813252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.813277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.813307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.813329 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.916264 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.916310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.916322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.916339 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:47 crc kubenswrapper[4865]: I0103 04:17:47.916350 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:47Z","lastTransitionTime":"2026-01-03T04:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.018771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.018836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.018853 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.018876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.018892 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:48Z","lastTransitionTime":"2026-01-03T04:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.121434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.121515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.121538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.121564 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.121581 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:48Z","lastTransitionTime":"2026-01-03T04:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.154710 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.154738 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.154715 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:48 crc kubenswrapper[4865]: E0103 04:17:48.154888 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:48 crc kubenswrapper[4865]: E0103 04:17:48.155047 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:48 crc kubenswrapper[4865]: E0103 04:17:48.155202 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.224961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.225026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.225051 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.225080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.225101 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:48Z","lastTransitionTime":"2026-01-03T04:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.327796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.327837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.327848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.327863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.327873 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:48Z","lastTransitionTime":"2026-01-03T04:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.368879 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.368942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.368964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.368991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.369013 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T04:17:48Z","lastTransitionTime":"2026-01-03T04:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.436656 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv"] Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.437012 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.440026 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.440473 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.440964 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.441141 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.501005 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.500980647 podStartE2EDuration="43.500980647s" podCreationTimestamp="2026-01-03 04:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.475656159 +0000 UTC m=+95.592709354" watchObservedRunningTime="2026-01-03 04:17:48.500980647 +0000 UTC m=+95.618033862" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.525009 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.524981598 podStartE2EDuration="1m16.524981598s" podCreationTimestamp="2026-01-03 04:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.501897851 +0000 UTC m=+95.618951086" watchObservedRunningTime="2026-01-03 04:17:48.524981598 +0000 UTC 
m=+95.642034823" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.554357 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-nrhl2" podStartSLOduration=75.554336803 podStartE2EDuration="1m15.554336803s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.554200369 +0000 UTC m=+95.671253604" watchObservedRunningTime="2026-01-03 04:17:48.554336803 +0000 UTC m=+95.671389988" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.574474 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podStartSLOduration=75.57444363 podStartE2EDuration="1m15.57444363s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.574252225 +0000 UTC m=+95.691305440" watchObservedRunningTime="2026-01-03 04:17:48.57444363 +0000 UTC m=+95.691496855" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.594636 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/333bae4d-16db-4361-8364-212b10653142-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.594740 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/333bae4d-16db-4361-8364-212b10653142-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: 
\"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.594779 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333bae4d-16db-4361-8364-212b10653142-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.594842 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/333bae4d-16db-4361-8364-212b10653142-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.594911 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/333bae4d-16db-4361-8364-212b10653142-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.599183 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nz8q7" podStartSLOduration=75.599166991 podStartE2EDuration="1m15.599166991s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.59913438 +0000 UTC m=+95.716187615" watchObservedRunningTime="2026-01-03 
04:17:48.599166991 +0000 UTC m=+95.716220186" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.673543 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.673519758 podStartE2EDuration="27.673519758s" podCreationTimestamp="2026-01-03 04:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.655882676 +0000 UTC m=+95.772935891" watchObservedRunningTime="2026-01-03 04:17:48.673519758 +0000 UTC m=+95.790572983" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.695648 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/333bae4d-16db-4361-8364-212b10653142-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.695708 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/333bae4d-16db-4361-8364-212b10653142-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.695744 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/333bae4d-16db-4361-8364-212b10653142-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.695783 
4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/333bae4d-16db-4361-8364-212b10653142-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.695804 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333bae4d-16db-4361-8364-212b10653142-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.695912 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/333bae4d-16db-4361-8364-212b10653142-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.696023 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/333bae4d-16db-4361-8364-212b10653142-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.697189 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/333bae4d-16db-4361-8364-212b10653142-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.704175 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hfsrf" podStartSLOduration=75.704155107 podStartE2EDuration="1m15.704155107s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.68557363 +0000 UTC m=+95.802626875" watchObservedRunningTime="2026-01-03 04:17:48.704155107 +0000 UTC m=+95.821208312" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.704990 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/333bae4d-16db-4361-8364-212b10653142-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.719054 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4m4gh" podStartSLOduration=75.719029384 podStartE2EDuration="1m15.719029384s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.704982569 +0000 UTC m=+95.822035764" watchObservedRunningTime="2026-01-03 04:17:48.719029384 +0000 UTC m=+95.836082599" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.723294 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/333bae4d-16db-4361-8364-212b10653142-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6sfmv\" (UID: \"333bae4d-16db-4361-8364-212b10653142\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.743031 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-phsbt" podStartSLOduration=74.743011665 podStartE2EDuration="1m14.743011665s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.721139081 +0000 UTC m=+95.838192286" watchObservedRunningTime="2026-01-03 04:17:48.743011665 +0000 UTC m=+95.860064860" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.763084 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.786041 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.786014804 podStartE2EDuration="1m11.786014804s" podCreationTimestamp="2026-01-03 04:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:48.766697668 +0000 UTC m=+95.883750893" watchObservedRunningTime="2026-01-03 04:17:48.786014804 +0000 UTC m=+95.903068009" Jan 03 04:17:48 crc kubenswrapper[4865]: W0103 04:17:48.787147 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod333bae4d_16db_4361_8364_212b10653142.slice/crio-b335055b1d8a4b0e4311cdfa71f1da5e0bd50ec3074d45dcff2449bc076682a4 WatchSource:0}: Error finding container b335055b1d8a4b0e4311cdfa71f1da5e0bd50ec3074d45dcff2449bc076682a4: Status 404 returned error can't find the container with id 
b335055b1d8a4b0e4311cdfa71f1da5e0bd50ec3074d45dcff2449bc076682a4 Jan 03 04:17:48 crc kubenswrapper[4865]: I0103 04:17:48.799874 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" event={"ID":"333bae4d-16db-4361-8364-212b10653142","Type":"ContainerStarted","Data":"b335055b1d8a4b0e4311cdfa71f1da5e0bd50ec3074d45dcff2449bc076682a4"} Jan 03 04:17:49 crc kubenswrapper[4865]: I0103 04:17:49.154822 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:49 crc kubenswrapper[4865]: E0103 04:17:49.155294 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:49 crc kubenswrapper[4865]: I0103 04:17:49.804680 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" event={"ID":"333bae4d-16db-4361-8364-212b10653142","Type":"ContainerStarted","Data":"d19386181e54a3f4c1d3f0df57ce5084d03fc861e901af0f9b09d0189fbbd102"} Jan 03 04:17:49 crc kubenswrapper[4865]: I0103 04:17:49.830866 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6sfmv" podStartSLOduration=76.830839519 podStartE2EDuration="1m16.830839519s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:49.829948146 +0000 UTC m=+96.947001361" watchObservedRunningTime="2026-01-03 04:17:49.830839519 +0000 UTC 
m=+96.947892734" Jan 03 04:17:50 crc kubenswrapper[4865]: I0103 04:17:50.154794 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:50 crc kubenswrapper[4865]: E0103 04:17:50.154915 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:50 crc kubenswrapper[4865]: I0103 04:17:50.154996 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:50 crc kubenswrapper[4865]: I0103 04:17:50.155005 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:50 crc kubenswrapper[4865]: E0103 04:17:50.155366 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:50 crc kubenswrapper[4865]: E0103 04:17:50.155525 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:50 crc kubenswrapper[4865]: I0103 04:17:50.172124 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 03 04:17:51 crc kubenswrapper[4865]: I0103 04:17:51.155235 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:51 crc kubenswrapper[4865]: E0103 04:17:51.155464 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:51 crc kubenswrapper[4865]: I0103 04:17:51.827252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:51 crc kubenswrapper[4865]: E0103 04:17:51.827545 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:17:51 crc kubenswrapper[4865]: E0103 04:17:51.827660 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs podName:20f5ddd2-fabb-45db-83ad-9c45135ec710 nodeName:}" failed. No retries permitted until 2026-01-03 04:18:55.827630167 +0000 UTC m=+162.944683392 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs") pod "network-metrics-daemon-wb9c7" (UID: "20f5ddd2-fabb-45db-83ad-9c45135ec710") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 04:17:52 crc kubenswrapper[4865]: I0103 04:17:52.155357 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:52 crc kubenswrapper[4865]: I0103 04:17:52.155357 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:52 crc kubenswrapper[4865]: I0103 04:17:52.155637 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:52 crc kubenswrapper[4865]: E0103 04:17:52.155508 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:52 crc kubenswrapper[4865]: E0103 04:17:52.155750 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:52 crc kubenswrapper[4865]: E0103 04:17:52.155820 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:53 crc kubenswrapper[4865]: I0103 04:17:53.155364 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:53 crc kubenswrapper[4865]: E0103 04:17:53.157926 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:53 crc kubenswrapper[4865]: I0103 04:17:53.191007 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.190977755 podStartE2EDuration="3.190977755s" podCreationTimestamp="2026-01-03 04:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:17:53.190526913 +0000 UTC m=+100.307580128" watchObservedRunningTime="2026-01-03 04:17:53.190977755 +0000 UTC m=+100.308030980" Jan 03 04:17:54 crc kubenswrapper[4865]: I0103 04:17:54.155320 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:54 crc kubenswrapper[4865]: I0103 04:17:54.155361 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:54 crc kubenswrapper[4865]: I0103 04:17:54.155457 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:54 crc kubenswrapper[4865]: E0103 04:17:54.155573 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:54 crc kubenswrapper[4865]: E0103 04:17:54.155779 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:54 crc kubenswrapper[4865]: E0103 04:17:54.156080 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:55 crc kubenswrapper[4865]: I0103 04:17:55.155620 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:55 crc kubenswrapper[4865]: E0103 04:17:55.155860 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:56 crc kubenswrapper[4865]: I0103 04:17:56.155561 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:56 crc kubenswrapper[4865]: I0103 04:17:56.155739 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:56 crc kubenswrapper[4865]: E0103 04:17:56.155838 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:56 crc kubenswrapper[4865]: I0103 04:17:56.155908 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:56 crc kubenswrapper[4865]: E0103 04:17:56.156035 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:56 crc kubenswrapper[4865]: E0103 04:17:56.156165 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:56 crc kubenswrapper[4865]: I0103 04:17:56.157207 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:17:56 crc kubenswrapper[4865]: E0103 04:17:56.157501 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:17:57 crc kubenswrapper[4865]: I0103 04:17:57.155538 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:57 crc kubenswrapper[4865]: E0103 04:17:57.155746 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:17:58 crc kubenswrapper[4865]: I0103 04:17:58.155015 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:17:58 crc kubenswrapper[4865]: I0103 04:17:58.155064 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:17:58 crc kubenswrapper[4865]: I0103 04:17:58.155092 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:17:58 crc kubenswrapper[4865]: E0103 04:17:58.155216 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:17:58 crc kubenswrapper[4865]: E0103 04:17:58.155338 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:17:58 crc kubenswrapper[4865]: E0103 04:17:58.155557 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:17:59 crc kubenswrapper[4865]: I0103 04:17:59.155227 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:17:59 crc kubenswrapper[4865]: E0103 04:17:59.156489 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:00 crc kubenswrapper[4865]: I0103 04:18:00.155215 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:00 crc kubenswrapper[4865]: I0103 04:18:00.155287 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:00 crc kubenswrapper[4865]: I0103 04:18:00.155229 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:00 crc kubenswrapper[4865]: E0103 04:18:00.155351 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:00 crc kubenswrapper[4865]: E0103 04:18:00.155541 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:00 crc kubenswrapper[4865]: E0103 04:18:00.155562 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:01 crc kubenswrapper[4865]: I0103 04:18:01.155634 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:01 crc kubenswrapper[4865]: E0103 04:18:01.155815 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:02 crc kubenswrapper[4865]: I0103 04:18:02.155664 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:02 crc kubenswrapper[4865]: I0103 04:18:02.155692 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:02 crc kubenswrapper[4865]: I0103 04:18:02.155709 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:02 crc kubenswrapper[4865]: E0103 04:18:02.156729 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:02 crc kubenswrapper[4865]: E0103 04:18:02.157258 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:02 crc kubenswrapper[4865]: E0103 04:18:02.157708 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:03 crc kubenswrapper[4865]: I0103 04:18:03.155111 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:03 crc kubenswrapper[4865]: E0103 04:18:03.156968 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:04 crc kubenswrapper[4865]: I0103 04:18:04.155156 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:04 crc kubenswrapper[4865]: I0103 04:18:04.155207 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:04 crc kubenswrapper[4865]: E0103 04:18:04.155367 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:04 crc kubenswrapper[4865]: I0103 04:18:04.155451 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:04 crc kubenswrapper[4865]: E0103 04:18:04.155630 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:04 crc kubenswrapper[4865]: E0103 04:18:04.155811 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:05 crc kubenswrapper[4865]: I0103 04:18:05.155503 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:05 crc kubenswrapper[4865]: E0103 04:18:05.156034 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:06 crc kubenswrapper[4865]: I0103 04:18:06.155196 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:06 crc kubenswrapper[4865]: I0103 04:18:06.155335 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:06 crc kubenswrapper[4865]: E0103 04:18:06.155624 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:06 crc kubenswrapper[4865]: I0103 04:18:06.155721 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:06 crc kubenswrapper[4865]: E0103 04:18:06.155881 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:06 crc kubenswrapper[4865]: E0103 04:18:06.156041 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:07 crc kubenswrapper[4865]: I0103 04:18:07.155002 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:07 crc kubenswrapper[4865]: E0103 04:18:07.155194 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:07 crc kubenswrapper[4865]: I0103 04:18:07.876668 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/1.log" Jan 03 04:18:07 crc kubenswrapper[4865]: I0103 04:18:07.877460 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/0.log" Jan 03 04:18:07 crc kubenswrapper[4865]: I0103 04:18:07.877527 4865 generic.go:334] "Generic (PLEG): container finished" podID="2fadcfb6-a571-4d6b-af2d-da885a478206" containerID="141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4" exitCode=1 Jan 03 04:18:07 crc kubenswrapper[4865]: I0103 04:18:07.877589 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerDied","Data":"141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4"} Jan 03 04:18:07 crc kubenswrapper[4865]: I0103 04:18:07.877690 4865 scope.go:117] "RemoveContainer" containerID="45ed6cae4fc12b29169bbad43f4fe2f06dbfad869c7a2a221b303ca2a5b55192" Jan 03 04:18:07 crc kubenswrapper[4865]: I0103 04:18:07.878067 4865 scope.go:117] "RemoveContainer" containerID="141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4" Jan 03 04:18:07 crc kubenswrapper[4865]: E0103 04:18:07.878368 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-nrhl2_openshift-multus(2fadcfb6-a571-4d6b-af2d-da885a478206)\"" pod="openshift-multus/multus-nrhl2" podUID="2fadcfb6-a571-4d6b-af2d-da885a478206" Jan 03 04:18:08 crc kubenswrapper[4865]: I0103 04:18:08.155699 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:08 crc kubenswrapper[4865]: I0103 04:18:08.155741 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:08 crc kubenswrapper[4865]: I0103 04:18:08.156008 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:08 crc kubenswrapper[4865]: E0103 04:18:08.156110 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:08 crc kubenswrapper[4865]: E0103 04:18:08.156259 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:08 crc kubenswrapper[4865]: E0103 04:18:08.156372 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:08 crc kubenswrapper[4865]: I0103 04:18:08.157558 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:18:08 crc kubenswrapper[4865]: E0103 04:18:08.157861 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jvxfl_openshift-ovn-kubernetes(226b5379-0cbe-42e6-b5af-917a5e4b734d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" Jan 03 04:18:08 crc kubenswrapper[4865]: I0103 04:18:08.882507 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/1.log" Jan 03 04:18:09 crc kubenswrapper[4865]: I0103 04:18:09.155122 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:09 crc kubenswrapper[4865]: E0103 04:18:09.155314 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:10 crc kubenswrapper[4865]: I0103 04:18:10.155532 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:10 crc kubenswrapper[4865]: I0103 04:18:10.155626 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:10 crc kubenswrapper[4865]: E0103 04:18:10.155690 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:10 crc kubenswrapper[4865]: I0103 04:18:10.155705 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:10 crc kubenswrapper[4865]: E0103 04:18:10.155827 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:10 crc kubenswrapper[4865]: E0103 04:18:10.155954 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:11 crc kubenswrapper[4865]: I0103 04:18:11.155526 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:11 crc kubenswrapper[4865]: E0103 04:18:11.155692 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:12 crc kubenswrapper[4865]: I0103 04:18:12.154688 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:12 crc kubenswrapper[4865]: I0103 04:18:12.154707 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:12 crc kubenswrapper[4865]: I0103 04:18:12.154838 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:12 crc kubenswrapper[4865]: E0103 04:18:12.155003 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:12 crc kubenswrapper[4865]: E0103 04:18:12.155120 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:12 crc kubenswrapper[4865]: E0103 04:18:12.155286 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:13 crc kubenswrapper[4865]: I0103 04:18:13.155412 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:13 crc kubenswrapper[4865]: E0103 04:18:13.156433 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:13 crc kubenswrapper[4865]: E0103 04:18:13.182826 4865 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 03 04:18:13 crc kubenswrapper[4865]: E0103 04:18:13.306963 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 04:18:14 crc kubenswrapper[4865]: I0103 04:18:14.155374 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:14 crc kubenswrapper[4865]: I0103 04:18:14.155449 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:14 crc kubenswrapper[4865]: E0103 04:18:14.155554 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:14 crc kubenswrapper[4865]: I0103 04:18:14.155591 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:14 crc kubenswrapper[4865]: E0103 04:18:14.155876 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:14 crc kubenswrapper[4865]: E0103 04:18:14.156002 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:15 crc kubenswrapper[4865]: I0103 04:18:15.155135 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:15 crc kubenswrapper[4865]: E0103 04:18:15.155547 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:16 crc kubenswrapper[4865]: I0103 04:18:16.155029 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:16 crc kubenswrapper[4865]: I0103 04:18:16.155093 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:16 crc kubenswrapper[4865]: E0103 04:18:16.155189 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:16 crc kubenswrapper[4865]: I0103 04:18:16.155093 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:16 crc kubenswrapper[4865]: E0103 04:18:16.155254 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:16 crc kubenswrapper[4865]: E0103 04:18:16.155449 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:17 crc kubenswrapper[4865]: I0103 04:18:17.155472 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:17 crc kubenswrapper[4865]: E0103 04:18:17.155662 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:18 crc kubenswrapper[4865]: I0103 04:18:18.155104 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:18 crc kubenswrapper[4865]: I0103 04:18:18.155134 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:18 crc kubenswrapper[4865]: I0103 04:18:18.155262 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:18 crc kubenswrapper[4865]: E0103 04:18:18.155502 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:18 crc kubenswrapper[4865]: E0103 04:18:18.155608 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:18 crc kubenswrapper[4865]: E0103 04:18:18.155740 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:18 crc kubenswrapper[4865]: E0103 04:18:18.308281 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 04:18:19 crc kubenswrapper[4865]: I0103 04:18:19.155119 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:19 crc kubenswrapper[4865]: E0103 04:18:19.155327 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:20 crc kubenswrapper[4865]: I0103 04:18:20.155696 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:20 crc kubenswrapper[4865]: I0103 04:18:20.155798 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:20 crc kubenswrapper[4865]: I0103 04:18:20.155821 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:20 crc kubenswrapper[4865]: E0103 04:18:20.156374 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:20 crc kubenswrapper[4865]: E0103 04:18:20.156587 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:20 crc kubenswrapper[4865]: E0103 04:18:20.156727 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:20 crc kubenswrapper[4865]: I0103 04:18:20.156958 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:18:20 crc kubenswrapper[4865]: I0103 04:18:20.925640 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/3.log" Jan 03 04:18:20 crc kubenswrapper[4865]: I0103 04:18:20.929479 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerStarted","Data":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} Jan 03 04:18:20 crc kubenswrapper[4865]: I0103 04:18:20.930071 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:18:21 crc kubenswrapper[4865]: I0103 04:18:21.082253 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podStartSLOduration=108.08221969 podStartE2EDuration="1m48.08221969s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:20.962646955 +0000 UTC m=+128.079700150" watchObservedRunningTime="2026-01-03 04:18:21.08221969 +0000 UTC m=+128.199272915" Jan 03 04:18:21 crc kubenswrapper[4865]: I0103 04:18:21.083733 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wb9c7"] Jan 03 04:18:21 crc kubenswrapper[4865]: I0103 04:18:21.083886 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:21 crc kubenswrapper[4865]: E0103 04:18:21.084081 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:22 crc kubenswrapper[4865]: I0103 04:18:22.155004 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:22 crc kubenswrapper[4865]: I0103 04:18:22.155039 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:22 crc kubenswrapper[4865]: I0103 04:18:22.155057 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:22 crc kubenswrapper[4865]: I0103 04:18:22.155333 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:22 crc kubenswrapper[4865]: E0103 04:18:22.155310 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:22 crc kubenswrapper[4865]: E0103 04:18:22.155456 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:22 crc kubenswrapper[4865]: E0103 04:18:22.155563 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:22 crc kubenswrapper[4865]: E0103 04:18:22.155644 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:22 crc kubenswrapper[4865]: I0103 04:18:22.156103 4865 scope.go:117] "RemoveContainer" containerID="141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4" Jan 03 04:18:23 crc kubenswrapper[4865]: E0103 04:18:23.309003 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 03 04:18:23 crc kubenswrapper[4865]: I0103 04:18:23.944547 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/1.log" Jan 03 04:18:23 crc kubenswrapper[4865]: I0103 04:18:23.944625 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerStarted","Data":"5f51ac2adbceb834fc3a6428c9be6afad0e378157dfa78c123bf38f0332c7c30"} Jan 03 04:18:24 crc kubenswrapper[4865]: I0103 04:18:24.155475 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:24 crc kubenswrapper[4865]: I0103 04:18:24.155481 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:24 crc kubenswrapper[4865]: E0103 04:18:24.156211 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:24 crc kubenswrapper[4865]: I0103 04:18:24.155543 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:24 crc kubenswrapper[4865]: I0103 04:18:24.155518 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:24 crc kubenswrapper[4865]: E0103 04:18:24.156376 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:24 crc kubenswrapper[4865]: E0103 04:18:24.156517 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:24 crc kubenswrapper[4865]: E0103 04:18:24.156622 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:26 crc kubenswrapper[4865]: I0103 04:18:26.155460 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:26 crc kubenswrapper[4865]: I0103 04:18:26.155512 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:26 crc kubenswrapper[4865]: E0103 04:18:26.155658 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:26 crc kubenswrapper[4865]: I0103 04:18:26.155948 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:26 crc kubenswrapper[4865]: E0103 04:18:26.156057 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:26 crc kubenswrapper[4865]: I0103 04:18:26.156223 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:26 crc kubenswrapper[4865]: E0103 04:18:26.156307 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:26 crc kubenswrapper[4865]: E0103 04:18:26.156549 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:28 crc kubenswrapper[4865]: I0103 04:18:28.155601 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:28 crc kubenswrapper[4865]: E0103 04:18:28.155789 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 04:18:28 crc kubenswrapper[4865]: I0103 04:18:28.155854 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:28 crc kubenswrapper[4865]: I0103 04:18:28.155912 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:28 crc kubenswrapper[4865]: I0103 04:18:28.155617 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:28 crc kubenswrapper[4865]: E0103 04:18:28.156033 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 04:18:28 crc kubenswrapper[4865]: E0103 04:18:28.156196 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wb9c7" podUID="20f5ddd2-fabb-45db-83ad-9c45135ec710" Jan 03 04:18:28 crc kubenswrapper[4865]: E0103 04:18:28.156358 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.621798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.674488 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fn8hc"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.675305 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: W0103 04:18:29.679111 4865 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 03 04:18:29 crc kubenswrapper[4865]: E0103 04:18:29.679200 4865 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.679300 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.681757 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-742r2"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.682441 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.686062 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.686108 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.686666 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.686773 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.695180 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.698991 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.699044 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.705376 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.706657 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.707176 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.705410 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.710919 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.711048 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.710936 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.720949 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.723587 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.723612 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.723864 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.724057 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.724212 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.724528 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.724681 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.724827 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.724967 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.725032 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.725370 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.726004 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.727059 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.727432 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.728166 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.728366 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fp86k"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.729118 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.728448 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.729076 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.730003 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.733763 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sqb86"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.734139 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s7cr9"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.734570 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.734737 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.730518 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.735269 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.730643 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.735765 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.732147 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.736073 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.730647 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.736697 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.736735 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.739021 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zjjj8"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.739606 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zjjj8"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.739776 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k6pzr"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.740712 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.742232 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.742752 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.743051 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cgxlq"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.743497 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgxlq"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744509 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-image-import-ca\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-etcd-serving-ca\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744575 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-audit-dir\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744599 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-node-pullsecrets\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744619 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-config\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744639 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-encryption-config\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744659 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvb2\" (UniqueName: \"kubernetes.io/projected/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-kube-api-access-nbvb2\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744689 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-audit\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744708 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-serving-cert\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744779 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-etcd-client\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.744798 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.746456 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.747249 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.750401 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wvrlk"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.751023 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wvrlk"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.753261 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmk4r"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.753746 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2v6xt"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.754276 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.754597 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.754872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.757425 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-742r2"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.767768 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fn8hc"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.769339 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.770511 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.771610 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.772145 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.772526 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.772568 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.772922 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.773621 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.773791 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.773876 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.774293 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.775014 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.777947 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.778863 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.779821 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.779912 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.779851 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.780071 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.780147 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.781547 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.781753 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.781765 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.781877 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.782159 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.782263 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.790922 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.791189 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.791372 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.792258 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.793619 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.793835 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.793869 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.794053 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.794203 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.794302 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.794450 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.794546 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.809723 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.810066 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.810194 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.810866 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.810967 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.811229 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.811304 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.811457 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.811642 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.811721 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.811859 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.811974 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.812220 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.812523 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.814188 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.814642 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.815159 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.815395 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.815611 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.815775 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816147 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816406 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816527 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816648 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816784 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816878 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816950 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.816998 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.817028 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.817091 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.817215 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.817343 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.818343 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.826913 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.827483 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.827619 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.829082 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.832423 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.837274 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.837970 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.839044 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h"]
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.843322 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.843812 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.844099 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.844394 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845312 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc329d2e-4de6-4290-901b-f4bdd0259fd0-trusted-ca\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845352 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-etcd-client\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845370 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845397 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmw7\" (UniqueName: \"kubernetes.io/projected/ee330cbc-666a-47ad-ae86-7b424349001b-kube-api-access-xhmw7\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845419 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc329d2e-4de6-4290-901b-f4bdd0259fd0-config\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845436 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee330cbc-666a-47ad-ae86-7b424349001b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845464 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-serving-cert\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845479 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/319feef6-a979-45be-8a1c-22c1a5cf42a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845495 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee330cbc-666a-47ad-ae86-7b424349001b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845512 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee330cbc-666a-47ad-ae86-7b424349001b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-etcd-client\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845548 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845593 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc329d2e-4de6-4290-901b-f4bdd0259fd0-serving-cert\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845623 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-image-import-ca\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845649 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845675 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-etcd-serving-ca\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845691 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-audit-dir\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845706 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvb2\" (UniqueName: \"kubernetes.io/projected/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-kube-api-access-nbvb2\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845722 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-node-pullsecrets\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845755 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-encryption-config\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845772 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-audit\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845790 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-config\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr"
Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845809 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-ca\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") "
pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845849 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-service-ca\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845870 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-client\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845891 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b05c89ba-dd9d-4272-8f12-b0edd96985bb-audit-dir\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845934 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-audit-policies\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845957 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9wp2\" (UniqueName: 
\"kubernetes.io/projected/b05c89ba-dd9d-4272-8f12-b0edd96985bb-kube-api-access-z9wp2\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845976 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc559\" (UniqueName: \"kubernetes.io/projected/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-kube-api-access-bc559\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.845993 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4g97\" (UniqueName: \"kubernetes.io/projected/319feef6-a979-45be-8a1c-22c1a5cf42a2-kube-api-access-t4g97\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846014 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-serving-cert\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846034 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-config\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846053 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319feef6-a979-45be-8a1c-22c1a5cf42a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846070 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njjk\" (UniqueName: \"kubernetes.io/projected/dc329d2e-4de6-4290-901b-f4bdd0259fd0-kube-api-access-6njjk\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846097 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-serving-cert\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846118 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-encryption-config\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846645 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.846970 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-image-import-ca\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.847000 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-etcd-serving-ca\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.847052 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-audit-dir\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.847556 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-audit\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.847633 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.847730 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-config\") pod 
\"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.847772 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.847866 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-node-pullsecrets\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.852125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-encryption-config\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.852935 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-serving-cert\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.853346 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-etcd-client\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.853879 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.854015 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.855216 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.856562 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.858179 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.858959 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8cgc7"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.859266 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.859477 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.859729 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.860058 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.867198 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.870237 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.870935 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.872259 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pwzdz"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.873177 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.873403 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.874892 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q9btt"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.875649 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.875921 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.877612 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.879341 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.889624 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.890235 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.892174 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.892721 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.892876 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hjzq4"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.893239 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.895031 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.895753 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.895851 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.897603 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.898010 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fp86k"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.898048 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.898542 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s7cr9"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.902069 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k6pzr"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.903176 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.904140 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.906432 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-w8m6t"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.907109 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.907623 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.908910 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wvrlk"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.910211 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zck7q"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.910749 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.911976 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.913104 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.914100 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.915531 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w8m6t"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.917209 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.918004 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cgxlq"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.919810 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.920808 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.921785 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zjjj8"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.924621 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2v6xt"] Jan 03 04:18:29 crc kubenswrapper[4865]: 
I0103 04:18:29.926704 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.928214 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmk4r"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.931064 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.933007 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sqb86"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.934412 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.936284 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.937507 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8cgc7"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.940044 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.940205 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zck7q"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.941586 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.947513 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6njjk\" (UniqueName: \"kubernetes.io/projected/dc329d2e-4de6-4290-901b-f4bdd0259fd0-kube-api-access-6njjk\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.947565 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319feef6-a979-45be-8a1c-22c1a5cf42a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948020 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-encryption-config\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-etcd-client\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948064 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhmw7\" (UniqueName: \"kubernetes.io/projected/ee330cbc-666a-47ad-ae86-7b424349001b-kube-api-access-xhmw7\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:29 
crc kubenswrapper[4865]: I0103 04:18:29.948084 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc329d2e-4de6-4290-901b-f4bdd0259fd0-trusted-ca\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948113 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee330cbc-666a-47ad-ae86-7b424349001b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948127 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc329d2e-4de6-4290-901b-f4bdd0259fd0-config\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948149 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/319feef6-a979-45be-8a1c-22c1a5cf42a2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948166 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee330cbc-666a-47ad-ae86-7b424349001b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" 
(UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-serving-cert\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948198 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee330cbc-666a-47ad-ae86-7b424349001b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948216 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc329d2e-4de6-4290-901b-f4bdd0259fd0-serving-cert\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948354 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319feef6-a979-45be-8a1c-22c1a5cf42a2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948230 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948440 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948470 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-ca\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948496 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-config\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948531 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-service-ca\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948553 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-client\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948609 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b05c89ba-dd9d-4272-8f12-b0edd96985bb-audit-dir\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948712 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b05c89ba-dd9d-4272-8f12-b0edd96985bb-audit-dir\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.948997 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.949058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-audit-policies\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.949061 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr"] Jan 03 
04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.949130 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.949300 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9wp2\" (UniqueName: \"kubernetes.io/projected/b05c89ba-dd9d-4272-8f12-b0edd96985bb-kube-api-access-z9wp2\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.949333 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4g97\" (UniqueName: \"kubernetes.io/projected/319feef6-a979-45be-8a1c-22c1a5cf42a2-kube-api-access-t4g97\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.949355 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc559\" (UniqueName: \"kubernetes.io/projected/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-kube-api-access-bc559\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.949393 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-serving-cert\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: 
\"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.950932 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b05c89ba-dd9d-4272-8f12-b0edd96985bb-audit-policies\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.951554 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc329d2e-4de6-4290-901b-f4bdd0259fd0-trusted-ca\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.951895 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-encryption-config\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.951934 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pwzdz"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.951968 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-ca\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.952121 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-service-ca\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.952504 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-etcd-client\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.952559 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc329d2e-4de6-4290-901b-f4bdd0259fd0-config\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.953126 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee330cbc-666a-47ad-ae86-7b424349001b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.953477 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-config\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.953889 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.953899 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-serving-cert\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.954470 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee330cbc-666a-47ad-ae86-7b424349001b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.954478 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc329d2e-4de6-4290-901b-f4bdd0259fd0-serving-cert\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.954647 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-etcd-client\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.955511 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/319feef6-a979-45be-8a1c-22c1a5cf42a2-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.955556 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.957201 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.957199 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b05c89ba-dd9d-4272-8f12-b0edd96985bb-serving-cert\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.957729 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.958896 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.963411 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.965013 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q9btt"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.966510 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-f5f7l"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.967239 4865 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.967908 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrzvr"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.969050 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.969132 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrzvr"] Jan 03 04:18:29 crc kubenswrapper[4865]: I0103 04:18:29.977968 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:29.998667 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.018422 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.037409 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.057693 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.077937 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.097865 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.138083 4865 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.155147 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.155221 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.155147 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.155165 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.157536 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.177731 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.198767 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.217239 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.253331 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.257319 4865 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.278440 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.298834 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.318792 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.338732 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.358219 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.379245 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.398776 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.418411 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.458059 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.478369 4865 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.497798 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.517799 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.538808 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.557913 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.578167 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.597985 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.638918 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.657926 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.678808 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 03 04:18:30 crc 
kubenswrapper[4865]: I0103 04:18:30.699435 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.719249 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.738370 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.758212 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.777547 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.798723 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.817831 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.838434 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.858545 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.876742 4865 request.go:700] Waited for 1.016461706s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0 Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.888549 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.899138 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.918202 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.937728 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.958002 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.978092 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 03 04:18:30 crc kubenswrapper[4865]: I0103 04:18:30.998733 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.017564 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.037810 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.058453 4865 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.078137 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.099359 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.118864 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.138821 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.158840 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.177745 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.198207 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.217865 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.239226 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.258879 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.279130 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.298999 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.317862 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.339287 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.357810 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.378896 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.398629 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.418359 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.441042 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.458818 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.477779 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.498600 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.517809 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.538952 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.539741 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.558118 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.578200 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.598166 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.617652 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: E0103 04:18:31.640744 4865 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 03 04:18:31 crc kubenswrapper[4865]: E0103 04:18:31.640778 4865 projected.go:194] Error preparing data for projected volume kube-api-access-nbvb2 for pod openshift-apiserver/apiserver-76f77b778f-fn8hc: failed to sync configmap cache: timed out waiting for the condition Jan 03 04:18:31 crc 
kubenswrapper[4865]: E0103 04:18:31.640856 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-kube-api-access-nbvb2 podName:35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.140831989 +0000 UTC m=+139.257885214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nbvb2" (UniqueName: "kubernetes.io/projected/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-kube-api-access-nbvb2") pod "apiserver-76f77b778f-fn8hc" (UID: "35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb") : failed to sync configmap cache: timed out waiting for the condition Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.665278 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6njjk\" (UniqueName: \"kubernetes.io/projected/dc329d2e-4de6-4290-901b-f4bdd0259fd0-kube-api-access-6njjk\") pod \"console-operator-58897d9998-wvrlk\" (UID: \"dc329d2e-4de6-4290-901b-f4bdd0259fd0\") " pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.683647 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhmw7\" (UniqueName: \"kubernetes.io/projected/ee330cbc-666a-47ad-ae86-7b424349001b-kube-api-access-xhmw7\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.707514 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4g97\" (UniqueName: \"kubernetes.io/projected/319feef6-a979-45be-8a1c-22c1a5cf42a2-kube-api-access-t4g97\") pod \"openshift-apiserver-operator-796bbdcf4f-9464r\" (UID: \"319feef6-a979-45be-8a1c-22c1a5cf42a2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.726074 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9wp2\" (UniqueName: \"kubernetes.io/projected/b05c89ba-dd9d-4272-8f12-b0edd96985bb-kube-api-access-z9wp2\") pod \"apiserver-7bbb656c7d-q4w96\" (UID: \"b05c89ba-dd9d-4272-8f12-b0edd96985bb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.732768 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.758918 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.759826 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc559\" (UniqueName: \"kubernetes.io/projected/c35a4ff8-8072-4e8d-a088-f84c96c40f7b-kube-api-access-bc559\") pod \"etcd-operator-b45778765-k6pzr\" (UID: \"c35a4ff8-8072-4e8d-a088-f84c96c40f7b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.768046 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee330cbc-666a-47ad-ae86-7b424349001b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zsgpf\" (UID: \"ee330cbc-666a-47ad-ae86-7b424349001b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.775806 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.778437 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.800143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.819002 4865 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.838087 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.861630 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.876861 4865 request.go:700] Waited for 1.75855209s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/persistentvolumes/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.900200 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.902301 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.916961 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.919009 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.942541 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.958843 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.959367 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973286 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973331 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/af7bc8fa-5059-413a-b03b-8a95d39f786c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973371 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/c2f1a3e5-c54e-40ae-8640-e7b964e2669b-metrics-tls\") pod \"dns-operator-744455d44c-2v6xt\" (UID: \"c2f1a3e5-c54e-40ae-8640-e7b964e2669b\") " pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973412 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6611402f-7bf5-4147-a8ef-d96f052cd572-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973432 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6611402f-7bf5-4147-a8ef-d96f052cd572-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973460 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-config\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973479 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-dir\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" 
Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973522 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973547 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf0fe541-6d83-4c12-b88f-adca678e9ed4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-96l9g\" (UID: \"bf0fe541-6d83-4c12-b88f-adca678e9ed4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973584 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-tls\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973602 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d35f74-419c-478e-a1e0-232ad73e7084-serving-cert\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973639 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-oauth-serving-cert\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973662 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973683 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-trusted-ca\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973703 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-config\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 
04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973721 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-config\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973754 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fbd343a-070a-4b55-b3f9-37114883bbbb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973779 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973797 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-client-ca\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973836 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdkx7\" (UniqueName: 
\"kubernetes.io/projected/ab146386-688e-4e0b-acf7-ee0d9c087d25-kube-api-access-bdkx7\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973858 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-trusted-ca-bundle\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973877 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9t9\" (UniqueName: \"kubernetes.io/projected/3fbd343a-070a-4b55-b3f9-37114883bbbb-kube-api-access-wj9t9\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973899 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d565a7-6d27-46e7-83a2-49034797be22-serving-cert\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973917 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nv78\" (UniqueName: \"kubernetes.io/projected/6611402f-7bf5-4147-a8ef-d96f052cd572-kube-api-access-7nv78\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973952 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chb5l\" (UniqueName: \"kubernetes.io/projected/af7bc8fa-5059-413a-b03b-8a95d39f786c-kube-api-access-chb5l\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.973973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl8r2\" (UniqueName: \"kubernetes.io/projected/31d35f74-419c-478e-a1e0-232ad73e7084-kube-api-access-sl8r2\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974002 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974025 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4f5\" (UniqueName: \"kubernetes.io/projected/da462e42-c060-4423-8e89-ded4d08f2868-kube-api-access-jk4f5\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:31 crc kubenswrapper[4865]: 
I0103 04:18:31.974046 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-serving-cert\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974066 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da462e42-c060-4423-8e89-ded4d08f2868-serving-cert\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974085 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974104 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974123 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974142 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974205 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x77f\" (UniqueName: \"kubernetes.io/projected/c2f1a3e5-c54e-40ae-8640-e7b964e2669b-kube-api-access-4x77f\") pod \"dns-operator-744455d44c-2v6xt\" (UID: \"c2f1a3e5-c54e-40ae-8640-e7b964e2669b\") " pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974226 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6kph\" (UniqueName: \"kubernetes.io/projected/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-kube-api-access-p6kph\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " 
pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-config\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974263 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-bound-sa-token\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974282 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-config\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974321 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792qm\" 
(UniqueName: \"kubernetes.io/projected/bf0fe541-6d83-4c12-b88f-adca678e9ed4-kube-api-access-792qm\") pod \"cluster-samples-operator-665b6dd947-96l9g\" (UID: \"bf0fe541-6d83-4c12-b88f-adca678e9ed4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974342 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-559q4\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-kube-api-access-559q4\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974360 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974411 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974432 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af7bc8fa-5059-413a-b03b-8a95d39f786c-images\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: 
\"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974451 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkhqt\" (UniqueName: \"kubernetes.io/projected/49ce7309-7ce7-4325-be8d-fbf7f19b1fcf-kube-api-access-hkhqt\") pod \"downloads-7954f5f757-zjjj8\" (UID: \"49ce7309-7ce7-4325-be8d-fbf7f19b1fcf\") " pod="openshift-console/downloads-7954f5f757-zjjj8" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974471 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7bc8fa-5059-413a-b03b-8a95d39f786c-config\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974491 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-auth-proxy-config\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974513 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd343a-070a-4b55-b3f9-37114883bbbb-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974535 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-certificates\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974554 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-client-ca\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974573 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-service-ca-bundle\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974608 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-policies\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: 
\"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974649 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974669 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-service-ca\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974688 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sqs4\" (UniqueName: \"kubernetes.io/projected/79d565a7-6d27-46e7-83a2-49034797be22-kube-api-access-7sqs4\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974707 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-machine-approver-tls\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974726 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmzhj\" (UniqueName: \"kubernetes.io/projected/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-kube-api-access-tmzhj\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.974745 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-oauth-config\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:31 crc kubenswrapper[4865]: E0103 04:18:31.976197 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.476165351 +0000 UTC m=+139.593218616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.979162 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 03 04:18:31 crc kubenswrapper[4865]: I0103 04:18:31.999159 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.032126 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-wvrlk"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.038193 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 03 04:18:32 crc kubenswrapper[4865]: W0103 04:18:32.042947 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc329d2e_4de6_4290_901b_f4bdd0259fd0.slice/crio-de7184c6b65b1c098916c45c2bca864faa3f8de871fd994963b3448677c292af WatchSource:0}: Error finding container de7184c6b65b1c098916c45c2bca864faa3f8de871fd994963b3448677c292af: Status 404 returned error can't find the container with id de7184c6b65b1c098916c45c2bca864faa3f8de871fd994963b3448677c292af Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075189 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075423 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8r67\" (UniqueName: \"kubernetes.io/projected/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-kube-api-access-l8r67\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075455 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffbh6\" (UniqueName: \"kubernetes.io/projected/61dddc61-7785-4656-833d-21b330e2910b-kube-api-access-ffbh6\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075477 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjm6\" (UniqueName: \"kubernetes.io/projected/a8e6d994-0d40-413e-a9b1-26d5fb968747-kube-api-access-6fjm6\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075496 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5p7\" (UniqueName: \"kubernetes.io/projected/8c095fb6-dc75-4da1-9361-b1601d6130ac-kube-api-access-2q5p7\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc 
kubenswrapper[4865]: I0103 04:18:32.075521 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hhc7\" (UniqueName: \"kubernetes.io/projected/8e1fb9b3-cd30-4419-9872-ec29cccb5957-kube-api-access-7hhc7\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075544 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6kph\" (UniqueName: \"kubernetes.io/projected/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-kube-api-access-p6kph\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075566 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14816cfb-cafa-40c7-96f1-b87d310e9264-webhook-cert\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075602 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e1fb9b3-cd30-4419-9872-ec29cccb5957-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075627 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-bound-sa-token\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075651 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st67s\" (UniqueName: \"kubernetes.io/projected/29b1ddd4-b577-45ed-b89e-6dfda5975433-kube-api-access-st67s\") pod \"multus-admission-controller-857f4d67dd-q9btt\" (UID: \"29b1ddd4-b577-45ed-b89e-6dfda5975433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075674 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/747fa886-3b49-458c-adb4-e3aa11361545-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075694 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a8e6d994-0d40-413e-a9b1-26d5fb968747-metrics-tls\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075716 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075737 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-default-certificate\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075762 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-stats-auth\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075788 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqxr\" (UniqueName: \"kubernetes.io/projected/725589e5-e586-4ffc-b0f9-e58c09aa64e6-kube-api-access-9nqxr\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075812 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/992bc434-e67e-4e47-a1aa-4fcf37043ad7-node-bootstrap-token\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075838 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-559q4\" (UniqueName: 
\"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-kube-api-access-559q4\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.075867 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.575839924 +0000 UTC m=+139.692893109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af7bc8fa-5059-413a-b03b-8a95d39f786c-images\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075958 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14816cfb-cafa-40c7-96f1-b87d310e9264-tmpfs\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.075983 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd343a-070a-4b55-b3f9-37114883bbbb-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076002 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61dddc61-7785-4656-833d-21b330e2910b-proxy-tls\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076019 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25320d8e-60c1-4c82-b281-558f7db73000-config\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076034 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7494a6a5-3c0d-43c8-bfca-00e55103eae5-config\") pod \"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076053 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmml\" (UniqueName: \"kubernetes.io/projected/472b09fa-6397-442e-bd28-40d3dc0aff44-kube-api-access-pwmml\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-mftcp\" (UID: \"472b09fa-6397-442e-bd28-40d3dc0aff44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7bc8fa-5059-413a-b03b-8a95d39f786c-config\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076088 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-auth-proxy-config\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076106 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-mountpoint-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076123 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthx7\" (UniqueName: \"kubernetes.io/projected/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-kube-api-access-qthx7\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076868 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-client-ca\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076927 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5hv\" (UniqueName: \"kubernetes.io/projected/064b364b-0a0b-4be8-98bf-880b71ff717e-kube-api-access-wf5hv\") pod \"ingress-canary-zck7q\" (UID: \"064b364b-0a0b-4be8-98bf-880b71ff717e\") " pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076962 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/992bc434-e67e-4e47-a1aa-4fcf37043ad7-certs\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.076989 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-service-ca\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077069 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077094 
4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmzhj\" (UniqueName: \"kubernetes.io/projected/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-kube-api-access-tmzhj\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077119 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-machine-approver-tls\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077142 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077164 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/af7bc8fa-5059-413a-b03b-8a95d39f786c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077178 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7bc8fa-5059-413a-b03b-8a95d39f786c-config\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: 
\"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077187 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2f1a3e5-c54e-40ae-8640-e7b964e2669b-metrics-tls\") pod \"dns-operator-744455d44c-2v6xt\" (UID: \"c2f1a3e5-c54e-40ae-8640-e7b964e2669b\") " pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077227 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-config\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.077631 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghpr8\" (UniqueName: 
\"kubernetes.io/projected/06d641b8-c3b3-4817-aff8-b0c24d75df64-kube-api-access-ghpr8\") pod \"package-server-manager-789f6589d5-ffh4h\" (UID: \"06d641b8-c3b3-4817-aff8-b0c24d75df64\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078161 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-service-ca\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078423 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-oauth-serving-cert\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078451 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad0b2803-dc7b-4687-b446-39f1e8645d4a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078470 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-serving-cert\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078487 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad0b2803-dc7b-4687-b446-39f1e8645d4a-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078508 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078571 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078667 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fbd343a-070a-4b55-b3f9-37114883bbbb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078694 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-client-ca\") pod \"controller-manager-879f6c89f-742r2\" (UID: 
\"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078720 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078749 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdkx7\" (UniqueName: \"kubernetes.io/projected/ab146386-688e-4e0b-acf7-ee0d9c087d25-kube-api-access-bdkx7\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078838 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9t9\" (UniqueName: \"kubernetes.io/projected/3fbd343a-070a-4b55-b3f9-37114883bbbb-kube-api-access-wj9t9\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078867 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl8r2\" (UniqueName: \"kubernetes.io/projected/31d35f74-419c-478e-a1e0-232ad73e7084-kube-api-access-sl8r2\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/06d641b8-c3b3-4817-aff8-b0c24d75df64-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ffh4h\" (UID: \"06d641b8-c3b3-4817-aff8-b0c24d75df64\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29b1ddd4-b577-45ed-b89e-6dfda5975433-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q9btt\" (UID: \"29b1ddd4-b577-45ed-b89e-6dfda5975433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.078943 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-config\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079001 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079087 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk4f5\" (UniqueName: \"kubernetes.io/projected/da462e42-c060-4423-8e89-ded4d08f2868-kube-api-access-jk4f5\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: 
\"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079142 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c095fb6-dc75-4da1-9361-b1601d6130ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079169 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75bb18cc-0cb4-4018-9d99-111bbfecb29a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079208 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-config\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079220 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-auth-proxy-config\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079237 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079259 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da462e42-c060-4423-8e89-ded4d08f2868-serving-cert\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079299 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079430 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079460 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: 
\"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-srv-cert\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079531 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x77f\" (UniqueName: \"kubernetes.io/projected/c2f1a3e5-c54e-40ae-8640-e7b964e2669b-kube-api-access-4x77f\") pod \"dns-operator-744455d44c-2v6xt\" (UID: \"c2f1a3e5-c54e-40ae-8640-e7b964e2669b\") " pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079560 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/af7bc8fa-5059-413a-b03b-8a95d39f786c-images\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079582 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-csi-data-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079609 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ms2\" (UniqueName: \"kubernetes.io/projected/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-kube-api-access-n2ms2\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079637 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-config\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079665 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079897 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-client-ca\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 
04:18:32.079957 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-config\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.079991 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25320d8e-60c1-4c82-b281-558f7db73000-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080017 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/725589e5-e586-4ffc-b0f9-e58c09aa64e6-signing-key\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080040 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-metrics-certs\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080062 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/747fa886-3b49-458c-adb4-e3aa11361545-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: 
\"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080104 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-792qm\" (UniqueName: \"kubernetes.io/projected/bf0fe541-6d83-4c12-b88f-adca678e9ed4-kube-api-access-792qm\") pod \"cluster-samples-operator-665b6dd947-96l9g\" (UID: \"bf0fe541-6d83-4c12-b88f-adca678e9ed4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080130 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-secret-volume\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080153 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61dddc61-7785-4656-833d-21b330e2910b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 
04:18:32.080206 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080242 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkhqt\" (UniqueName: \"kubernetes.io/projected/49ce7309-7ce7-4325-be8d-fbf7f19b1fcf-kube-api-access-hkhqt\") pod \"downloads-7954f5f757-zjjj8\" (UID: \"49ce7309-7ce7-4325-be8d-fbf7f19b1fcf\") " pod="openshift-console/downloads-7954f5f757-zjjj8" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080268 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-registration-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080289 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfc74\" (UniqueName: \"kubernetes.io/projected/14816cfb-cafa-40c7-96f1-b87d310e9264-kube-api-access-vfc74\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.080310 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7494a6a5-3c0d-43c8-bfca-00e55103eae5-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.081177 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.081842 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-oauth-serving-cert\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.082191 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fbd343a-070a-4b55-b3f9-37114883bbbb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.084333 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-machine-approver-tls\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.084542 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd343a-070a-4b55-b3f9-37114883bbbb-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.084592 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-client-ca\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.085033 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.085835 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2f1a3e5-c54e-40ae-8640-e7b964e2669b-metrics-tls\") pod \"dns-operator-744455d44c-2v6xt\" (UID: \"c2f1a3e5-c54e-40ae-8640-e7b964e2669b\") " pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.085999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 
04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.087182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-config\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.087943 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088191 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrgrl\" (UniqueName: \"kubernetes.io/projected/ad0b2803-dc7b-4687-b446-39f1e8645d4a-kube-api-access-hrgrl\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088220 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-socket-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088267 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qsd\" (UniqueName: \"kubernetes.io/projected/747fa886-3b49-458c-adb4-e3aa11361545-kube-api-access-r6qsd\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088275 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-config\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088291 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088319 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-certificates\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088346 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bpf\" (UniqueName: \"kubernetes.io/projected/a9154ae2-7dc4-4ce8-bbda-11d33395e8ff-kube-api-access-79bpf\") pod \"migrator-59844c95c7-8lxcr\" (UID: \"a9154ae2-7dc4-4ce8-bbda-11d33395e8ff\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" Jan 03 04:18:32 crc 
kubenswrapper[4865]: I0103 04:18:32.088812 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088901 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-service-ca-bundle\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.088971 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.089115 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-policies\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.089160 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sqs4\" (UniqueName: \"kubernetes.io/projected/79d565a7-6d27-46e7-83a2-49034797be22-kube-api-access-7sqs4\") pod 
\"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.089440 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.589423958 +0000 UTC m=+139.706477143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.089564 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79d565a7-6d27-46e7-83a2-49034797be22-service-ca-bundle\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090231 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-oauth-config\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090290 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-plugins-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090320 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25320d8e-60c1-4c82-b281-558f7db73000-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-certificates\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090407 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75bb18cc-0cb4-4018-9d99-111bbfecb29a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090478 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfpz\" (UniqueName: \"kubernetes.io/projected/aaf191da-bc40-411b-bef2-649b5063978e-kube-api-access-2sfpz\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090511 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8e6d994-0d40-413e-a9b1-26d5fb968747-config-volume\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6611402f-7bf5-4147-a8ef-d96f052cd572-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090589 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6611402f-7bf5-4147-a8ef-d96f052cd572-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090616 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14816cfb-cafa-40c7-96f1-b87d310e9264-apiservice-cert\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090641 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/472b09fa-6397-442e-bd28-40d3dc0aff44-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mftcp\" (UID: \"472b09fa-6397-442e-bd28-40d3dc0aff44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090668 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-dir\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090729 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090756 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgk95\" (UniqueName: \"kubernetes.io/projected/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-kube-api-access-lgk95\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090778 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/61dddc61-7785-4656-833d-21b330e2910b-images\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090804 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf0fe541-6d83-4c12-b88f-adca678e9ed4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-96l9g\" (UID: \"bf0fe541-6d83-4c12-b88f-adca678e9ed4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090834 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/064b364b-0a0b-4be8-98bf-880b71ff717e-cert\") pod \"ingress-canary-zck7q\" (UID: \"064b364b-0a0b-4be8-98bf-880b71ff717e\") " pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090860 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-dir\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.091226 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da462e42-c060-4423-8e89-ded4d08f2868-serving-cert\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.091738 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.092352 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6611402f-7bf5-4147-a8ef-d96f052cd572-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.092744 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/af7bc8fa-5059-413a-b03b-8a95d39f786c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.090971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.093675 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc 
kubenswrapper[4865]: I0103 04:18:32.093948 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-policies\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.093996 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/725589e5-e586-4ffc-b0f9-e58c09aa64e6-signing-cabundle\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.094023 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d35f74-419c-478e-a1e0-232ad73e7084-serving-cert\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.094057 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.094082 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-tls\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: 
\"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.094130 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7494a6a5-3c0d-43c8-bfca-00e55103eae5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095157 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-trusted-ca\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095201 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75bb18cc-0cb4-4018-9d99-111bbfecb29a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095224 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-config\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095257 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-config\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095325 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-trusted-ca-bundle\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095347 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d565a7-6d27-46e7-83a2-49034797be22-serving-cert\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095414 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nv78\" (UniqueName: \"kubernetes.io/projected/6611402f-7bf5-4147-a8ef-d96f052cd572-kube-api-access-7nv78\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095436 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gldbt\" (UniqueName: \"kubernetes.io/projected/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-kube-api-access-gldbt\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") 
" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095457 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c095fb6-dc75-4da1-9361-b1601d6130ac-srv-cert\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.095473 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e1fb9b3-cd30-4419-9872-ec29cccb5957-proxy-tls\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.096465 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.096646 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad0b2803-dc7b-4687-b446-39f1e8645d4a-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.096671 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chb5l\" 
(UniqueName: \"kubernetes.io/projected/af7bc8fa-5059-413a-b03b-8a95d39f786c-kube-api-access-chb5l\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.096713 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-config\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.097759 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.097898 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-trusted-ca\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.097929 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf0fe541-6d83-4c12-b88f-adca678e9ed4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-96l9g\" (UID: \"bf0fe541-6d83-4c12-b88f-adca678e9ed4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 
04:18:32.098264 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6611402f-7bf5-4147-a8ef-d96f052cd572-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.098323 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-serving-cert\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.098344 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-service-ca-bundle\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.099620 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-trusted-ca-bundle\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.100209 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chcng\" (UniqueName: \"kubernetes.io/projected/992bc434-e67e-4e47-a1aa-4fcf37043ad7-kube-api-access-chcng\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " 
pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.100550 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-config-volume\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.100940 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79d565a7-6d27-46e7-83a2-49034797be22-serving-cert\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.101080 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k6pzr"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.102442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-config\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.103160 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: W0103 04:18:32.103354 
4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35a4ff8_8072_4e8d_a088_f84c96c40f7b.slice/crio-ff98b507b04efcc353ce9a3ccd425a4b77ec254b56cee9fb718b4d8292bd9373 WatchSource:0}: Error finding container ff98b507b04efcc353ce9a3ccd425a4b77ec254b56cee9fb718b4d8292bd9373: Status 404 returned error can't find the container with id ff98b507b04efcc353ce9a3ccd425a4b77ec254b56cee9fb718b4d8292bd9373 Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.104039 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-serving-cert\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.104731 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.105188 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d35f74-419c-478e-a1e0-232ad73e7084-serving-cert\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.106520 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.111238 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-oauth-config\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.111698 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-559q4\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-kube-api-access-559q4\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.115272 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-tls\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.132154 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6kph\" (UniqueName: \"kubernetes.io/projected/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-kube-api-access-p6kph\") pod \"console-f9d7485db-cgxlq\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.152737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-bound-sa-token\") pod 
\"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.176523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmzhj\" (UniqueName: \"kubernetes.io/projected/f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5-kube-api-access-tmzhj\") pod \"machine-approver-56656f9798-d28sx\" (UID: \"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.184927 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.190844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdkx7\" (UniqueName: \"kubernetes.io/projected/ab146386-688e-4e0b-acf7-ee0d9c087d25-kube-api-access-bdkx7\") pod \"oauth-openshift-558db77b4-sqb86\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.196215 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.200972 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.201132 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.701086172 +0000 UTC m=+139.818139357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201172 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201234 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7494a6a5-3c0d-43c8-bfca-00e55103eae5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201280 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75bb18cc-0cb4-4018-9d99-111bbfecb29a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc 
kubenswrapper[4865]: I0103 04:18:32.201320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gldbt\" (UniqueName: \"kubernetes.io/projected/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-kube-api-access-gldbt\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201368 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c095fb6-dc75-4da1-9361-b1601d6130ac-srv-cert\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201416 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e1fb9b3-cd30-4419-9872-ec29cccb5957-proxy-tls\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201505 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad0b2803-dc7b-4687-b446-39f1e8645d4a-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201546 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-service-ca-bundle\") pod \"router-default-5444994796-hjzq4\" (UID: 
\"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201580 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-config-volume\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201599 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chcng\" (UniqueName: \"kubernetes.io/projected/992bc434-e67e-4e47-a1aa-4fcf37043ad7-kube-api-access-chcng\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201620 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8r67\" (UniqueName: \"kubernetes.io/projected/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-kube-api-access-l8r67\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffbh6\" (UniqueName: \"kubernetes.io/projected/61dddc61-7785-4656-833d-21b330e2910b-kube-api-access-ffbh6\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201673 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6fjm6\" (UniqueName: \"kubernetes.io/projected/a8e6d994-0d40-413e-a9b1-26d5fb968747-kube-api-access-6fjm6\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201692 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5p7\" (UniqueName: \"kubernetes.io/projected/8c095fb6-dc75-4da1-9361-b1601d6130ac-kube-api-access-2q5p7\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201707 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14816cfb-cafa-40c7-96f1-b87d310e9264-webhook-cert\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201742 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e1fb9b3-cd30-4419-9872-ec29cccb5957-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201762 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hhc7\" (UniqueName: \"kubernetes.io/projected/8e1fb9b3-cd30-4419-9872-ec29cccb5957-kube-api-access-7hhc7\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 
03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201780 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st67s\" (UniqueName: \"kubernetes.io/projected/29b1ddd4-b577-45ed-b89e-6dfda5975433-kube-api-access-st67s\") pod \"multus-admission-controller-857f4d67dd-q9btt\" (UID: \"29b1ddd4-b577-45ed-b89e-6dfda5975433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201796 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-default-certificate\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-stats-auth\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201853 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/747fa886-3b49-458c-adb4-e3aa11361545-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201871 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a8e6d994-0d40-413e-a9b1-26d5fb968747-metrics-tls\") pod \"dns-default-w8m6t\" (UID: 
\"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201907 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqxr\" (UniqueName: \"kubernetes.io/projected/725589e5-e586-4ffc-b0f9-e58c09aa64e6-kube-api-access-9nqxr\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201923 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/992bc434-e67e-4e47-a1aa-4fcf37043ad7-node-bootstrap-token\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201939 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14816cfb-cafa-40c7-96f1-b87d310e9264-tmpfs\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201958 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvb2\" (UniqueName: \"kubernetes.io/projected/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-kube-api-access-nbvb2\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.201995 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7494a6a5-3c0d-43c8-bfca-00e55103eae5-config\") pod 
\"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202011 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmml\" (UniqueName: \"kubernetes.io/projected/472b09fa-6397-442e-bd28-40d3dc0aff44-kube-api-access-pwmml\") pod \"control-plane-machine-set-operator-78cbb6b69f-mftcp\" (UID: \"472b09fa-6397-442e-bd28-40d3dc0aff44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202027 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61dddc61-7785-4656-833d-21b330e2910b-proxy-tls\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202068 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25320d8e-60c1-4c82-b281-558f7db73000-config\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202086 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-mountpoint-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202100 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qthx7\" (UniqueName: \"kubernetes.io/projected/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-kube-api-access-qthx7\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202117 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf5hv\" (UniqueName: \"kubernetes.io/projected/064b364b-0a0b-4be8-98bf-880b71ff717e-kube-api-access-wf5hv\") pod \"ingress-canary-zck7q\" (UID: \"064b364b-0a0b-4be8-98bf-880b71ff717e\") " pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/992bc434-e67e-4e47-a1aa-4fcf37043ad7-certs\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202221 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghpr8\" (UniqueName: \"kubernetes.io/projected/06d641b8-c3b3-4817-aff8-b0c24d75df64-kube-api-access-ghpr8\") pod \"package-server-manager-789f6589d5-ffh4h\" (UID: \"06d641b8-c3b3-4817-aff8-b0c24d75df64\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202271 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad0b2803-dc7b-4687-b446-39f1e8645d4a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc 
kubenswrapper[4865]: I0103 04:18:32.202308 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-serving-cert\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202326 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad0b2803-dc7b-4687-b446-39f1e8645d4a-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202397 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/06d641b8-c3b3-4817-aff8-b0c24d75df64-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ffh4h\" (UID: \"06d641b8-c3b3-4817-aff8-b0c24d75df64\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202416 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29b1ddd4-b577-45ed-b89e-6dfda5975433-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q9btt\" (UID: \"29b1ddd4-b577-45ed-b89e-6dfda5975433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202437 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c095fb6-dc75-4da1-9361-b1601d6130ac-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202478 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-config\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202492 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75bb18cc-0cb4-4018-9d99-111bbfecb29a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202514 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-srv-cert\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202558 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-csi-data-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2ms2\" 
(UniqueName: \"kubernetes.io/projected/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-kube-api-access-n2ms2\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202600 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202638 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25320d8e-60c1-4c82-b281-558f7db73000-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202653 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/725589e5-e586-4ffc-b0f9-e58c09aa64e6-signing-key\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-metrics-certs\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202709 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/747fa886-3b49-458c-adb4-e3aa11361545-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202734 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-secret-volume\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61dddc61-7785-4656-833d-21b330e2910b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202793 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-registration-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202810 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfc74\" (UniqueName: \"kubernetes.io/projected/14816cfb-cafa-40c7-96f1-b87d310e9264-kube-api-access-vfc74\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: 
\"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202826 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7494a6a5-3c0d-43c8-bfca-00e55103eae5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-socket-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qsd\" (UniqueName: \"kubernetes.io/projected/747fa886-3b49-458c-adb4-e3aa11361545-kube-api-access-r6qsd\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202914 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202953 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hrgrl\" (UniqueName: \"kubernetes.io/projected/ad0b2803-dc7b-4687-b446-39f1e8645d4a-kube-api-access-hrgrl\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.202971 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bpf\" (UniqueName: \"kubernetes.io/projected/a9154ae2-7dc4-4ce8-bbda-11d33395e8ff-kube-api-access-79bpf\") pod \"migrator-59844c95c7-8lxcr\" (UID: \"a9154ae2-7dc4-4ce8-bbda-11d33395e8ff\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203027 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203059 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-plugins-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203074 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25320d8e-60c1-4c82-b281-558f7db73000-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203088 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75bb18cc-0cb4-4018-9d99-111bbfecb29a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203141 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfpz\" (UniqueName: \"kubernetes.io/projected/aaf191da-bc40-411b-bef2-649b5063978e-kube-api-access-2sfpz\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203158 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8e6d994-0d40-413e-a9b1-26d5fb968747-config-volume\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203173 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14816cfb-cafa-40c7-96f1-b87d310e9264-apiservice-cert\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203193 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/472b09fa-6397-442e-bd28-40d3dc0aff44-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mftcp\" (UID: \"472b09fa-6397-442e-bd28-40d3dc0aff44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgk95\" (UniqueName: \"kubernetes.io/projected/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-kube-api-access-lgk95\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203231 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/61dddc61-7785-4656-833d-21b330e2910b-images\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/725589e5-e586-4ffc-b0f9-e58c09aa64e6-signing-cabundle\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203281 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/064b364b-0a0b-4be8-98bf-880b71ff717e-cert\") pod \"ingress-canary-zck7q\" (UID: \"064b364b-0a0b-4be8-98bf-880b71ff717e\") " pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.203891 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-csi-data-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.204876 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75bb18cc-0cb4-4018-9d99-111bbfecb29a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.204970 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-service-ca-bundle\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.205241 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-config-volume\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.206434 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e1fb9b3-cd30-4419-9872-ec29cccb5957-proxy-tls\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 
04:18:32.206768 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/747fa886-3b49-458c-adb4-e3aa11361545-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.207612 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.207625 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c095fb6-dc75-4da1-9361-b1601d6130ac-srv-cert\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.207622 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.707599596 +0000 UTC m=+139.824652821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.207968 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad0b2803-dc7b-4687-b446-39f1e8645d4a-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.208067 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7494a6a5-3c0d-43c8-bfca-00e55103eae5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.209039 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-socket-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.209329 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14816cfb-cafa-40c7-96f1-b87d310e9264-tmpfs\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.209959 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7494a6a5-3c0d-43c8-bfca-00e55103eae5-config\") pod \"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.210067 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/747fa886-3b49-458c-adb4-e3aa11361545-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.210438 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25320d8e-60c1-4c82-b281-558f7db73000-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.210761 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61dddc61-7785-4656-833d-21b330e2910b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.211606 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a8e6d994-0d40-413e-a9b1-26d5fb968747-config-volume\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.212877 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-secret-volume\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.212976 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/725589e5-e586-4ffc-b0f9-e58c09aa64e6-signing-key\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.213049 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29b1ddd4-b577-45ed-b89e-6dfda5975433-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q9btt\" (UID: \"29b1ddd4-b577-45ed-b89e-6dfda5975433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.213228 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-config\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.213502 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ad0b2803-dc7b-4687-b446-39f1e8645d4a-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.213744 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/992bc434-e67e-4e47-a1aa-4fcf37043ad7-node-bootstrap-token\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.213926 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-default-certificate\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-registration-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214335 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/06d641b8-c3b3-4817-aff8-b0c24d75df64-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ffh4h\" (UID: \"06d641b8-c3b3-4817-aff8-b0c24d75df64\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214358 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/064b364b-0a0b-4be8-98bf-880b71ff717e-cert\") pod \"ingress-canary-zck7q\" (UID: \"064b364b-0a0b-4be8-98bf-880b71ff717e\") " pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214614 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-srv-cert\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214646 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-mountpoint-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214822 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e1fb9b3-cd30-4419-9872-ec29cccb5957-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214940 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14816cfb-cafa-40c7-96f1-b87d310e9264-webhook-cert\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.214988 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvb2\" (UniqueName: \"kubernetes.io/projected/35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb-kube-api-access-nbvb2\") pod \"apiserver-76f77b778f-fn8hc\" (UID: \"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb\") " pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.215355 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/472b09fa-6397-442e-bd28-40d3dc0aff44-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mftcp\" (UID: \"472b09fa-6397-442e-bd28-40d3dc0aff44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.215573 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14816cfb-cafa-40c7-96f1-b87d310e9264-apiservice-cert\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.215661 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-serving-cert\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.215818 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61dddc61-7785-4656-833d-21b330e2910b-proxy-tls\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.215890 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/725589e5-e586-4ffc-b0f9-e58c09aa64e6-signing-cabundle\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.215905 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75bb18cc-0cb4-4018-9d99-111bbfecb29a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.215960 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-plugins-dir\") pod \"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.216730 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/992bc434-e67e-4e47-a1aa-4fcf37043ad7-certs\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.216860 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: 
\"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.217008 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/61dddc61-7785-4656-833d-21b330e2910b-images\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.217655 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a8e6d994-0d40-413e-a9b1-26d5fb968747-metrics-tls\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.217991 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-stats-auth\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.218117 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-metrics-certs\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.218490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25320d8e-60c1-4c82-b281-558f7db73000-config\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.219407 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk4f5\" (UniqueName: \"kubernetes.io/projected/da462e42-c060-4423-8e89-ded4d08f2868-kube-api-access-jk4f5\") pod \"route-controller-manager-6576b87f9c-dbw7m\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.219894 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.220924 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c095fb6-dc75-4da1-9361-b1601d6130ac-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.232904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl8r2\" (UniqueName: \"kubernetes.io/projected/31d35f74-419c-478e-a1e0-232ad73e7084-kube-api-access-sl8r2\") pod \"controller-manager-879f6c89f-742r2\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.248676 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.258540 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-792qm\" (UniqueName: \"kubernetes.io/projected/bf0fe541-6d83-4c12-b88f-adca678e9ed4-kube-api-access-792qm\") pod \"cluster-samples-operator-665b6dd947-96l9g\" (UID: \"bf0fe541-6d83-4c12-b88f-adca678e9ed4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.280468 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.281867 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9t9\" (UniqueName: \"kubernetes.io/projected/3fbd343a-070a-4b55-b3f9-37114883bbbb-kube-api-access-wj9t9\") pod \"openshift-config-operator-7777fb866f-p8hk7\" (UID: \"3fbd343a-070a-4b55-b3f9-37114883bbbb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.291284 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkhqt\" (UniqueName: \"kubernetes.io/projected/49ce7309-7ce7-4325-be8d-fbf7f19b1fcf-kube-api-access-hkhqt\") pod \"downloads-7954f5f757-zjjj8\" (UID: \"49ce7309-7ce7-4325-be8d-fbf7f19b1fcf\") " pod="openshift-console/downloads-7954f5f757-zjjj8" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.291556 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" Jan 03 04:18:32 crc kubenswrapper[4865]: W0103 04:18:32.302802 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9e020ef_8a7a_44b9_8ea3_f4dd49b1a5d5.slice/crio-367739c46e0a0fdc345befc3468d248ae8e4afbfd891a0c7a347503dc461ec93 WatchSource:0}: Error finding container 367739c46e0a0fdc345befc3468d248ae8e4afbfd891a0c7a347503dc461ec93: Status 404 returned error can't find the container with id 367739c46e0a0fdc345befc3468d248ae8e4afbfd891a0c7a347503dc461ec93 Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.303770 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.303910 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.803889269 +0000 UTC m=+139.920942454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.304367 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.304736 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.804724302 +0000 UTC m=+139.921777487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.305936 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-zjjj8" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.311151 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x77f\" (UniqueName: \"kubernetes.io/projected/c2f1a3e5-c54e-40ae-8640-e7b964e2669b-kube-api-access-4x77f\") pod \"dns-operator-744455d44c-2v6xt\" (UID: \"c2f1a3e5-c54e-40ae-8640-e7b964e2669b\") " pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.320006 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.329934 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.339542 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.340667 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sqs4\" (UniqueName: \"kubernetes.io/projected/79d565a7-6d27-46e7-83a2-49034797be22-kube-api-access-7sqs4\") pod \"authentication-operator-69f744f599-fp86k\" (UID: \"79d565a7-6d27-46e7-83a2-49034797be22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.367884 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.380869 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nv78\" (UniqueName: \"kubernetes.io/projected/6611402f-7bf5-4147-a8ef-d96f052cd572-kube-api-access-7nv78\") pod \"openshift-controller-manager-operator-756b6f6bc6-lclwv\" (UID: \"6611402f-7bf5-4147-a8ef-d96f052cd572\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.404323 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chb5l\" (UniqueName: \"kubernetes.io/projected/af7bc8fa-5059-413a-b03b-8a95d39f786c-kube-api-access-chb5l\") pod \"machine-api-operator-5694c8668f-s7cr9\" (UID: \"af7bc8fa-5059-413a-b03b-8a95d39f786c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.404994 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.405312 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:32.90529946 +0000 UTC m=+140.022352645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.420432 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7494a6a5-3c0d-43c8-bfca-00e55103eae5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rnlzq\" (UID: \"7494a6a5-3c0d-43c8-bfca-00e55103eae5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.424426 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.430794 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.438331 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qsd\" (UniqueName: \"kubernetes.io/projected/747fa886-3b49-458c-adb4-e3aa11361545-kube-api-access-r6qsd\") pod \"kube-storage-version-migrator-operator-b67b599dd-78b64\" (UID: \"747fa886-3b49-458c-adb4-e3aa11361545\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.443598 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.456014 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chcng\" (UniqueName: \"kubernetes.io/projected/992bc434-e67e-4e47-a1aa-4fcf37043ad7-kube-api-access-chcng\") pod \"machine-config-server-f5f7l\" (UID: \"992bc434-e67e-4e47-a1aa-4fcf37043ad7\") " pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.473376 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sqb86"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.475211 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gldbt\" (UniqueName: \"kubernetes.io/projected/7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0-kube-api-access-gldbt\") pod \"service-ca-operator-777779d784-fmh9q\" (UID: \"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.475564 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.494216 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2ms2\" (UniqueName: \"kubernetes.io/projected/8abcadd4-2cc5-4337-9f6f-4e462d9e25e4-kube-api-access-n2ms2\") pod \"catalog-operator-68c6474976-zc4xp\" (UID: \"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.506178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.506566 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.006550726 +0000 UTC m=+140.123603911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.514988 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75bb18cc-0cb4-4018-9d99-111bbfecb29a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xtz6v\" (UID: \"75bb18cc-0cb4-4018-9d99-111bbfecb29a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.526159 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.527315 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.540997 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.541954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfc74\" (UniqueName: \"kubernetes.io/projected/14816cfb-cafa-40c7-96f1-b87d310e9264-kube-api-access-vfc74\") pod \"packageserver-d55dfcdfc-mfpfj\" (UID: \"14816cfb-cafa-40c7-96f1-b87d310e9264\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.543123 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zjjj8"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.544085 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.552601 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f5f7l" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.563528 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqxr\" (UniqueName: \"kubernetes.io/projected/725589e5-e586-4ffc-b0f9-e58c09aa64e6-kube-api-access-9nqxr\") pod \"service-ca-9c57cc56f-pwzdz\" (UID: \"725589e5-e586-4ffc-b0f9-e58c09aa64e6\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.600205 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bpf\" (UniqueName: \"kubernetes.io/projected/a9154ae2-7dc4-4ce8-bbda-11d33395e8ff-kube-api-access-79bpf\") pod \"migrator-59844c95c7-8lxcr\" (UID: \"a9154ae2-7dc4-4ce8-bbda-11d33395e8ff\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.600751 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.607827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.608302 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.108284315 +0000 UTC m=+140.225337500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.609617 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8r67\" (UniqueName: \"kubernetes.io/projected/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-kube-api-access-l8r67\") pod \"collect-profiles-29456895-5vk8g\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.612186 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.646962 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffbh6\" (UniqueName: \"kubernetes.io/projected/61dddc61-7785-4656-833d-21b330e2910b-kube-api-access-ffbh6\") pod \"machine-config-operator-74547568cd-cmfx6\" (UID: \"61dddc61-7785-4656-833d-21b330e2910b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.651509 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cgxlq"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.670634 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthx7\" (UniqueName: \"kubernetes.io/projected/886e1b37-a8e5-44bd-a8ba-ac2b09a5d936-kube-api-access-qthx7\") pod 
\"csi-hostpathplugin-rrzvr\" (UID: \"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936\") " pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.675668 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fjm6\" (UniqueName: \"kubernetes.io/projected/a8e6d994-0d40-413e-a9b1-26d5fb968747-kube-api-access-6fjm6\") pod \"dns-default-w8m6t\" (UID: \"a8e6d994-0d40-413e-a9b1-26d5fb968747\") " pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.677239 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghpr8\" (UniqueName: \"kubernetes.io/projected/06d641b8-c3b3-4817-aff8-b0c24d75df64-kube-api-access-ghpr8\") pod \"package-server-manager-789f6589d5-ffh4h\" (UID: \"06d641b8-c3b3-4817-aff8-b0c24d75df64\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.692596 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgk95\" (UniqueName: \"kubernetes.io/projected/5a4248d3-b647-41fa-9d18-b0ca99fd4cbc-kube-api-access-lgk95\") pod \"router-default-5444994796-hjzq4\" (UID: \"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc\") " pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.709609 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.709967 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.209950382 +0000 UTC m=+140.327003567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.715632 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.717834 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmml\" (UniqueName: \"kubernetes.io/projected/472b09fa-6397-442e-bd28-40d3dc0aff44-kube-api-access-pwmml\") pod \"control-plane-machine-set-operator-78cbb6b69f-mftcp\" (UID: \"472b09fa-6397-442e-bd28-40d3dc0aff44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:32 crc kubenswrapper[4865]: W0103 04:18:32.717937 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6e3a03_5fe2_4e65_a77e_da971c2dd666.slice/crio-1b32df479cfe8b2412d819a68204930dc57188e1687e9c5a9aaf3891cfe69b56 WatchSource:0}: Error finding container 1b32df479cfe8b2412d819a68204930dc57188e1687e9c5a9aaf3891cfe69b56: Status 404 returned error can't find the container with id 1b32df479cfe8b2412d819a68204930dc57188e1687e9c5a9aaf3891cfe69b56 Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.732621 4865 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.736160 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf5hv\" (UniqueName: \"kubernetes.io/projected/064b364b-0a0b-4be8-98bf-880b71ff717e-kube-api-access-wf5hv\") pod \"ingress-canary-zck7q\" (UID: \"064b364b-0a0b-4be8-98bf-880b71ff717e\") " pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.739633 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.747020 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.753223 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrgrl\" (UniqueName: \"kubernetes.io/projected/ad0b2803-dc7b-4687-b446-39f1e8645d4a-kube-api-access-hrgrl\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.755496 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.776279 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5p7\" (UniqueName: \"kubernetes.io/projected/8c095fb6-dc75-4da1-9361-b1601d6130ac-kube-api-access-2q5p7\") pod \"olm-operator-6b444d44fb-xjxbl\" (UID: \"8c095fb6-dc75-4da1-9361-b1601d6130ac\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.783475 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.793751 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.795301 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfpz\" (UniqueName: \"kubernetes.io/projected/aaf191da-bc40-411b-bef2-649b5063978e-kube-api-access-2sfpz\") pod \"marketplace-operator-79b997595-8cgc7\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.797568 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.805220 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" Jan 03 04:18:32 crc kubenswrapper[4865]: W0103 04:18:32.808812 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992bc434_e67e_4e47_a1aa_4fcf37043ad7.slice/crio-41cfc87fc64daff8cd4eb621268b70ec9f5144ba49abde5af60619b170e1d049 WatchSource:0}: Error finding container 41cfc87fc64daff8cd4eb621268b70ec9f5144ba49abde5af60619b170e1d049: Status 404 returned error can't find the container with id 41cfc87fc64daff8cd4eb621268b70ec9f5144ba49abde5af60619b170e1d049 Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.811434 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.811726 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.311713582 +0000 UTC m=+140.428766767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.812035 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.813459 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad0b2803-dc7b-4687-b446-39f1e8645d4a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5h97\" (UID: \"ad0b2803-dc7b-4687-b446-39f1e8645d4a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.819579 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.834235 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st67s\" (UniqueName: \"kubernetes.io/projected/29b1ddd4-b577-45ed-b89e-6dfda5975433-kube-api-access-st67s\") pod \"multus-admission-controller-857f4d67dd-q9btt\" (UID: \"29b1ddd4-b577-45ed-b89e-6dfda5975433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.834496 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.846892 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zck7q" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.850892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25320d8e-60c1-4c82-b281-558f7db73000-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ll6mp\" (UID: \"25320d8e-60c1-4c82-b281-558f7db73000\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.857804 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.875061 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.893137 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hhc7\" (UniqueName: \"kubernetes.io/projected/8e1fb9b3-cd30-4419-9872-ec29cccb5957-kube-api-access-7hhc7\") pod \"machine-config-controller-84d6567774-qb4sc\" (UID: \"8e1fb9b3-cd30-4419-9872-ec29cccb5957\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.904213 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fn8hc"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.915520 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 
04:18:32 crc kubenswrapper[4865]: E0103 04:18:32.916264 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.416248245 +0000 UTC m=+140.533301430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.932169 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2v6xt"] Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.988616 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" Jan 03 04:18:32 crc kubenswrapper[4865]: I0103 04:18:32.989174 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.000179 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" event={"ID":"ab146386-688e-4e0b-acf7-ee0d9c087d25","Type":"ContainerStarted","Data":"b8a09cd79b5ecc43264e18ff7850d75e79197be294bf85d3f9e582fbb553df05"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.000460 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.014646 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zjjj8" event={"ID":"49ce7309-7ce7-4325-be8d-fbf7f19b1fcf","Type":"ContainerStarted","Data":"b8a14a9070aa4c2b0877434f38c7728227e19dc60aa9b0237648aebd16dba883"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.016725 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.017011 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.516995718 +0000 UTC m=+140.634048903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.027985 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" event={"ID":"7494a6a5-3c0d-43c8-bfca-00e55103eae5","Type":"ContainerStarted","Data":"70cc64365864bd1dfec8c3889597ad188328ea0dcc5d4aa5d67db5df3b1900d5"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.061582 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.072222 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.089959 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.091235 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgxlq" event={"ID":"7a6e3a03-5fe2-4e65-a77e-da971c2dd666","Type":"ContainerStarted","Data":"1b32df479cfe8b2412d819a68204930dc57188e1687e9c5a9aaf3891cfe69b56"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.114849 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" event={"ID":"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5","Type":"ContainerStarted","Data":"35f65160e6e57f491a7327ae8e4648032081e55b102a316ad2de92c699c1eb6a"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.114885 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" event={"ID":"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5","Type":"ContainerStarted","Data":"367739c46e0a0fdc345befc3468d248ae8e4afbfd891a0c7a347503dc461ec93"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.119486 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.119799 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.619788346 +0000 UTC m=+140.736841531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.128426 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" event={"ID":"c35a4ff8-8072-4e8d-a088-f84c96c40f7b","Type":"ContainerStarted","Data":"0cf1b59771d664d2d2d0c8bc26f24495cfb2c3f2b189f0b8be14355010a2c716"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.128470 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" event={"ID":"c35a4ff8-8072-4e8d-a088-f84c96c40f7b","Type":"ContainerStarted","Data":"ff98b507b04efcc353ce9a3ccd425a4b77ec254b56cee9fb718b4d8292bd9373"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.143291 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wvrlk" event={"ID":"dc329d2e-4de6-4290-901b-f4bdd0259fd0","Type":"ContainerStarted","Data":"96f7d3c509b15fce20e6ba2e1c71535424dfb19c1bcefe50043f3a4438f63b7d"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.143335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-wvrlk" event={"ID":"dc329d2e-4de6-4290-901b-f4bdd0259fd0","Type":"ContainerStarted","Data":"de7184c6b65b1c098916c45c2bca864faa3f8de871fd994963b3448677c292af"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.143759 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 
04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.151163 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64"] Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.151563 4865 generic.go:334] "Generic (PLEG): container finished" podID="b05c89ba-dd9d-4272-8f12-b0edd96985bb" containerID="9e51bb14071bf69b7b32e38653b11ab8dab69c5094dd96659b782279c45b6e5d" exitCode=0 Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.151634 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" event={"ID":"b05c89ba-dd9d-4272-8f12-b0edd96985bb","Type":"ContainerDied","Data":"9e51bb14071bf69b7b32e38653b11ab8dab69c5094dd96659b782279c45b6e5d"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.151656 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" event={"ID":"b05c89ba-dd9d-4272-8f12-b0edd96985bb","Type":"ContainerStarted","Data":"3bdf6ca6066c15ca2df314d9be1f21680ce6e730eea97a640449b30ae4ed4cf5"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.183725 4865 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dbw7m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.183769 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" podUID="da462e42-c060-4423-8e89-ded4d08f2868" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.197936 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" event={"ID":"319feef6-a979-45be-8a1c-22c1a5cf42a2","Type":"ContainerStarted","Data":"f073fc6290cf2599e9d783206a95409a817bbe159dd8716ed3111c535445d7b2"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.197988 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-wvrlk" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.198002 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-742r2"] Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.198016 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.198024 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" event={"ID":"319feef6-a979-45be-8a1c-22c1a5cf42a2","Type":"ContainerStarted","Data":"f32ad636fc08be687ac320366dac9c63611951982dbbb3963d2d0bdc4afd7bb5"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.198034 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f5f7l" event={"ID":"992bc434-e67e-4e47-a1aa-4fcf37043ad7","Type":"ContainerStarted","Data":"41cfc87fc64daff8cd4eb621268b70ec9f5144ba49abde5af60619b170e1d049"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.198045 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" event={"ID":"da462e42-c060-4423-8e89-ded4d08f2868","Type":"ContainerStarted","Data":"2a96d58ff062d4c23f469a3ac60f024b25e483846a61e098e47d03133aff65ee"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.198053 
4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" event={"ID":"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb","Type":"ContainerStarted","Data":"09496967c81022956cbd174e4c246099b16e152ca24fd449016a09cc7a002b64"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.207229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" event={"ID":"ee330cbc-666a-47ad-ae86-7b424349001b","Type":"ContainerStarted","Data":"19bac9bbc66706230202b828d039c8b3ce858989735b9c556d8e4b966dae859e"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.207283 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" event={"ID":"ee330cbc-666a-47ad-ae86-7b424349001b","Type":"ContainerStarted","Data":"12a4cdde7adb22ecca7ad3307833431d1716cfa426eaed4888514865f86f13c2"} Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.221550 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.223356 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.723336734 +0000 UTC m=+140.840389919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.323400 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.324075 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.824057226 +0000 UTC m=+140.941110411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.424919 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.425080 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.925054725 +0000 UTC m=+141.042107910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.425351 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.425627 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:33.92562038 +0000 UTC m=+141.042673565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.528835 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.529059 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.029046134 +0000 UTC m=+141.146099309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.632984 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.633750 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.133734912 +0000 UTC m=+141.250788097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.716215 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7"] Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.735552 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.735897 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.235881462 +0000 UTC m=+141.352934647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.839521 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.840574 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.34055787 +0000 UTC m=+141.457611055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: W0103 04:18:33.899304 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fbd343a_070a_4b55_b3f9_37114883bbbb.slice/crio-88445152baa8faeb1dea1dd8703478bf2a7add428864def897094af635abe14f WatchSource:0}: Error finding container 88445152baa8faeb1dea1dd8703478bf2a7add428864def897094af635abe14f: Status 404 returned error can't find the container with id 88445152baa8faeb1dea1dd8703478bf2a7add428864def897094af635abe14f Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.943043 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.943259 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.443229004 +0000 UTC m=+141.560282189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:33 crc kubenswrapper[4865]: I0103 04:18:33.943583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:33 crc kubenswrapper[4865]: E0103 04:18:33.943974 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.443960123 +0000 UTC m=+141.561013308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.047710 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.048347 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.548332093 +0000 UTC m=+141.665385278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.060985 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.061204 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.095597 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.149894 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.150277 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.650261457 +0000 UTC m=+141.767314642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.210277 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zsgpf" podStartSLOduration=121.210252801 podStartE2EDuration="2m1.210252801s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:34.208488913 +0000 UTC m=+141.325542098" watchObservedRunningTime="2026-01-03 04:18:34.210252801 +0000 UTC m=+141.327305986" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.214534 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgxlq" event={"ID":"7a6e3a03-5fe2-4e65-a77e-da971c2dd666","Type":"ContainerStarted","Data":"d51e1818eaa1c8c9cacdb1621a4a17c6ab2f5444c77125a501ed003a5e875b21"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.218705 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hjzq4" event={"ID":"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc","Type":"ContainerStarted","Data":"95a152ea2117d9e16f443bf31b34de3d99e6e97aff2fec78295edb333a2fa6de"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.218748 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hjzq4" 
event={"ID":"5a4248d3-b647-41fa-9d18-b0ca99fd4cbc","Type":"ContainerStarted","Data":"53b820a9298025e90be19f6788c56c8c73117be391659781d369f94fc4eb9927"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.219269 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fp86k"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.221115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zjjj8" event={"ID":"49ce7309-7ce7-4325-be8d-fbf7f19b1fcf","Type":"ContainerStarted","Data":"a56008a9860a8baddaf6cfd0df907b3af79df8c632a4890f13e5066378ad6db5"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.227113 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zjjj8" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.227164 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s7cr9"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.228476 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-zjjj8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.228522 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zjjj8" podUID="49ce7309-7ce7-4325-be8d-fbf7f19b1fcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.231942 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" 
event={"ID":"747fa886-3b49-458c-adb4-e3aa11361545","Type":"ContainerStarted","Data":"eba4e449580d18ef16bf2dfd9b3f1fa338f7de21a518ab1f7b63bc083ecfa857"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.231977 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" event={"ID":"747fa886-3b49-458c-adb4-e3aa11361545","Type":"ContainerStarted","Data":"7a88377fe8da580d1c24bf832c653f742e8b72118f5c00e713bd081d25da6ad3"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.238529 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.240835 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" event={"ID":"3fbd343a-070a-4b55-b3f9-37114883bbbb","Type":"ContainerStarted","Data":"88445152baa8faeb1dea1dd8703478bf2a7add428864def897094af635abe14f"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.243481 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" event={"ID":"da462e42-c060-4423-8e89-ded4d08f2868","Type":"ContainerStarted","Data":"31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.250476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.251429 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.751352769 +0000 UTC m=+141.868406004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.256443 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f5f7l" event={"ID":"992bc434-e67e-4e47-a1aa-4fcf37043ad7","Type":"ContainerStarted","Data":"fd2d5a8c24a6eee4c57e96a20888bfdc2a181e0331d8940f30235f2c1caf0815"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.262354 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.265996 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" event={"ID":"ab146386-688e-4e0b-acf7-ee0d9c087d25","Type":"ContainerStarted","Data":"5d22cb4d6a03a5e2226c7c0f790b1667a67ceace4b6b1661b51c62f4907ee641"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.266319 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.267672 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" 
event={"ID":"c2f1a3e5-c54e-40ae-8640-e7b964e2669b","Type":"ContainerStarted","Data":"06e8a5f18fd499566bea43f4f278d486af13bd0acda1e4b285a0ae30131d0d7b"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.270661 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" event={"ID":"31d35f74-419c-478e-a1e0-232ad73e7084","Type":"ContainerStarted","Data":"96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.270688 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" event={"ID":"31d35f74-419c-478e-a1e0-232ad73e7084","Type":"ContainerStarted","Data":"43d6e1aa642c32bcfc2adc68114c180e1dd0122afbde4deb78a57a44a62dfeac"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.270703 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.274467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" event={"ID":"bf0fe541-6d83-4c12-b88f-adca678e9ed4","Type":"ContainerStarted","Data":"66f594f780e9d62f78f7d1da0a2a7921a697687242a03bf95902ff26dc33596a"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.274532 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" event={"ID":"bf0fe541-6d83-4c12-b88f-adca678e9ed4","Type":"ContainerStarted","Data":"016e151c2ac1c81c4b2e638d749882ca9c3cfcd8046e67b1d34f601df5c63bf9"} Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.276831 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 
04:18:34.296983 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9464r" podStartSLOduration=121.296961598 podStartE2EDuration="2m1.296961598s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:34.292638412 +0000 UTC m=+141.409691617" watchObservedRunningTime="2026-01-03 04:18:34.296961598 +0000 UTC m=+141.414014783" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.353454 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.360306 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.860288611 +0000 UTC m=+141.977341796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.460899 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.461199 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:34.961183227 +0000 UTC m=+142.078236412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.466506 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" podStartSLOduration=120.466490959 podStartE2EDuration="2m0.466490959s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:34.450808939 +0000 UTC m=+141.567862124" watchObservedRunningTime="2026-01-03 04:18:34.466490959 +0000 UTC m=+141.583544144" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.564872 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.565994 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.065949327 +0000 UTC m=+142.183032413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.611021 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-k6pzr" podStartSLOduration=121.610996631 podStartE2EDuration="2m1.610996631s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:34.610241701 +0000 UTC m=+141.727294886" watchObservedRunningTime="2026-01-03 04:18:34.610996631 +0000 UTC m=+141.728049816" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.633888 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-wvrlk" podStartSLOduration=121.633869682 podStartE2EDuration="2m1.633869682s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:34.631198901 +0000 UTC m=+141.748252086" watchObservedRunningTime="2026-01-03 04:18:34.633869682 +0000 UTC m=+141.750922857" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.667465 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.667848 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.16783385 +0000 UTC m=+142.284887035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.737366 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zjjj8" podStartSLOduration=121.737347968 podStartE2EDuration="2m1.737347968s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:34.73181342 +0000 UTC m=+141.848866595" watchObservedRunningTime="2026-01-03 04:18:34.737347968 +0000 UTC m=+141.854401153" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.755140 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.762750 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.769630 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.770103 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.270087292 +0000 UTC m=+142.387140477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.772129 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cgxlq" podStartSLOduration=121.772111127 podStartE2EDuration="2m1.772111127s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:34.769694572 +0000 UTC m=+141.886747757" watchObservedRunningTime="2026-01-03 04:18:34.772111127 +0000 UTC m=+141.889164302" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.791109 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zck7q"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.795321 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-rrzvr"] Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.820324 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.871252 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.871642 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.371626827 +0000 UTC m=+142.488680002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:34 crc kubenswrapper[4865]: I0103 04:18:34.972597 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:34 crc kubenswrapper[4865]: E0103 04:18:34.972948 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.472936554 +0000 UTC m=+142.589989739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.048572 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-f5f7l" podStartSLOduration=6.048549915 podStartE2EDuration="6.048549915s" podCreationTimestamp="2026-01-03 04:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:35.046776918 +0000 UTC m=+142.163830153" watchObservedRunningTime="2026-01-03 04:18:35.048549915 +0000 UTC m=+142.165603120" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.073645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.073891 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.573857131 +0000 UTC m=+142.690910356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.074142 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.074663 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.574646033 +0000 UTC m=+142.691699248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.145427 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 04:18:35 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 03 04:18:35 crc kubenswrapper[4865]: [+]process-running ok Jan 03 04:18:35 crc kubenswrapper[4865]: healthz check failed Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.145903 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 04:18:35 crc kubenswrapper[4865]: W0103 04:18:35.153702 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25320d8e_60c1_4c82_b281_558f7db73000.slice/crio-c4eb956c0bec7c975a04583c8f077a07d60e64d116ff12b366096dc9e8fd6924 WatchSource:0}: Error finding container c4eb956c0bec7c975a04583c8f077a07d60e64d116ff12b366096dc9e8fd6924: Status 404 returned error can't find the container with id c4eb956c0bec7c975a04583c8f077a07d60e64d116ff12b366096dc9e8fd6924 Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.174930 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.175375 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.675361375 +0000 UTC m=+142.792414550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.267289 4865 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sqb86 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.267358 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.23:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.276073 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.276478 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.776462617 +0000 UTC m=+142.893515822 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.279606 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" event={"ID":"c2f1a3e5-c54e-40ae-8640-e7b964e2669b","Type":"ContainerStarted","Data":"549e63dcdf8f6f136a319a1579e8772c1b3bbf3018e850e60739965a1269f53e"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.280725 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zck7q" event={"ID":"064b364b-0a0b-4be8-98bf-880b71ff717e","Type":"ContainerStarted","Data":"9c0b82b27829d72de83dd423b68bf80441105e5a6b82c56573155514449bfd62"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.281470 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" 
event={"ID":"a9154ae2-7dc4-4ce8-bbda-11d33395e8ff","Type":"ContainerStarted","Data":"bdf724ff56608ee86121e0c8cfd0210d356d5d8e7f37ac73cd3b02e8bbf8b9fd"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.282153 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" event={"ID":"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0","Type":"ContainerStarted","Data":"2c5cc80961133edd3249943565395251ca9cfc683bb91347b0f6651c644672e7"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.282905 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" event={"ID":"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936","Type":"ContainerStarted","Data":"8278c64675fe94d6bad3eb6aa9d7208d56a7828d5dc01a8939c9eff87571c4a3"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.283526 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" event={"ID":"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60","Type":"ContainerStarted","Data":"37c9de788fdfaad6282dcaf4eb96c7af870e2e91d8c677fbc6880f177d15e347"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.286283 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" event={"ID":"79d565a7-6d27-46e7-83a2-49034797be22","Type":"ContainerStarted","Data":"e9b06a5dcf7c900ab6ed27b7de2ccc7d79ce56a70cdbb11aed843384bd7afc41"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.287599 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" event={"ID":"7494a6a5-3c0d-43c8-bfca-00e55103eae5","Type":"ContainerStarted","Data":"5949f841c34848dcb991067558cc2f5ce70037c2fd782ad86086af9841dde552"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.292413 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" event={"ID":"75bb18cc-0cb4-4018-9d99-111bbfecb29a","Type":"ContainerStarted","Data":"980a508a5dbb933f8a170aeef2ef3bb53d95239bbd09ef603a0558a9c7dd8caf"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.293319 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" event={"ID":"af7bc8fa-5059-413a-b03b-8a95d39f786c","Type":"ContainerStarted","Data":"0364b4b39ce0e307c4faaa34ba24394260943b916a79a9d163169ec3755bbae7"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.295214 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" event={"ID":"f9e020ef-8a7a-44b9-8ea3-f4dd49b1a5d5","Type":"ContainerStarted","Data":"74cf0ae2b082e5ce4556bdbc7732b17066e3798574c358e95a36801578ec4417"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.296528 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" event={"ID":"bf0fe541-6d83-4c12-b88f-adca678e9ed4","Type":"ContainerStarted","Data":"9073a17d3acf5b5e6c10783ea8338142364e06607d488fe6e4c764a12a05a02d"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.297334 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" event={"ID":"6611402f-7bf5-4147-a8ef-d96f052cd572","Type":"ContainerStarted","Data":"a606ca301ac1de9c0da741c601c996a97826192f5742d4b2ed4178563d90190e"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.298083 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" event={"ID":"25320d8e-60c1-4c82-b281-558f7db73000","Type":"ContainerStarted","Data":"c4eb956c0bec7c975a04583c8f077a07d60e64d116ff12b366096dc9e8fd6924"} 
Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.299166 4865 generic.go:334] "Generic (PLEG): container finished" podID="35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb" containerID="4dd51ee7397fc3107be0d03ce39c613ddf234d617c1088ed546af092b091f08e" exitCode=0 Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.300132 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" event={"ID":"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb","Type":"ContainerDied","Data":"4dd51ee7397fc3107be0d03ce39c613ddf234d617c1088ed546af092b091f08e"} Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.301003 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-zjjj8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.301042 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zjjj8" podUID="49ce7309-7ce7-4325-be8d-fbf7f19b1fcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.377128 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.377415 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-03 04:18:35.877390884 +0000 UTC m=+142.994444069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.378011 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.380396 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.880364243 +0000 UTC m=+142.997417428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.472555 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78b64" podStartSLOduration=121.472535887 podStartE2EDuration="2m1.472535887s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:35.191719562 +0000 UTC m=+142.308772757" watchObservedRunningTime="2026-01-03 04:18:35.472535887 +0000 UTC m=+142.589589072" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.475347 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8cgc7"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.485205 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q9btt"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.485255 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.485964 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.486292 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:35.986275844 +0000 UTC m=+143.103329029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.486317 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.531847 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.538916 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w8m6t"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.545627 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.548234 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pwzdz"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.558699 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.562503 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.562709 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.584617 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp"] Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.585076 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" podStartSLOduration=122.585061345 podStartE2EDuration="2m2.585061345s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:35.511232812 +0000 UTC m=+142.628285997" watchObservedRunningTime="2026-01-03 04:18:35.585061345 +0000 UTC m=+142.702114530" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.587048 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.587415 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-03 04:18:36.087378756 +0000 UTC m=+143.204431941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.593343 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" podStartSLOduration=122.593322415 podStartE2EDuration="2m2.593322415s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:35.553302456 +0000 UTC m=+142.670355661" watchObservedRunningTime="2026-01-03 04:18:35.593322415 +0000 UTC m=+142.710375600" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.596338 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hjzq4" podStartSLOduration=122.596328226 podStartE2EDuration="2m2.596328226s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:35.571125703 +0000 UTC m=+142.688178898" watchObservedRunningTime="2026-01-03 04:18:35.596328226 +0000 UTC m=+142.713381401" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.614068 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 
04:18:35.619219 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d28sx" podStartSLOduration=122.619201987 podStartE2EDuration="2m2.619201987s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:35.586625077 +0000 UTC m=+142.703678272" watchObservedRunningTime="2026-01-03 04:18:35.619201987 +0000 UTC m=+142.736255172" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.648396 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-96l9g" podStartSLOduration=122.648370607 podStartE2EDuration="2m2.648370607s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:35.646413024 +0000 UTC m=+142.763466209" watchObservedRunningTime="2026-01-03 04:18:35.648370607 +0000 UTC m=+142.765423792" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.689254 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.689621 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:36.189603218 +0000 UTC m=+143.306656403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.792491 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.793148 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:36.293135836 +0000 UTC m=+143.410189021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.824234 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 04:18:35 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 03 04:18:35 crc kubenswrapper[4865]: [+]process-running ok Jan 03 04:18:35 crc kubenswrapper[4865]: healthz check failed Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.824287 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 04:18:35 crc kubenswrapper[4865]: I0103 04:18:35.899855 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:35 crc kubenswrapper[4865]: E0103 04:18:35.900471 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-03 04:18:36.400455464 +0000 UTC m=+143.517508649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.003881 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.004170 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:36.504159186 +0000 UTC m=+143.621212361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.104322 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.104770 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:36.604754275 +0000 UTC m=+143.721807460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.207148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.207702 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:36.707689265 +0000 UTC m=+143.824742450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.309327 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.309697 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:36.809682291 +0000 UTC m=+143.926735476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.321911 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zck7q" event={"ID":"064b364b-0a0b-4be8-98bf-880b71ff717e","Type":"ContainerStarted","Data":"94000a73107c58db63db5fd125e5f77204e15926d1774258ea51b597366cc274"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.352698 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" event={"ID":"472b09fa-6397-442e-bd28-40d3dc0aff44","Type":"ContainerStarted","Data":"21853da4b5cc5eb805350077a4a3d4e970629e3983e753b97f3ecd736adb7cc7"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.352724 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zck7q" podStartSLOduration=7.352714702 podStartE2EDuration="7.352714702s" podCreationTimestamp="2026-01-03 04:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.351680094 +0000 UTC m=+143.468733279" watchObservedRunningTime="2026-01-03 04:18:36.352714702 +0000 UTC m=+143.469767887" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.365674 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" 
event={"ID":"7f699bc8-e6df-4a4d-b0a6-f1a50417a2b0","Type":"ContainerStarted","Data":"af1b2715a2918c350cd07df900401d1891cc86be01d89c914dd1bbe860e54edc"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.381702 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" event={"ID":"14816cfb-cafa-40c7-96f1-b87d310e9264","Type":"ContainerStarted","Data":"9af9b35d51a2356908f9d7dce5d7d1efdc7adf9eca932b7caa0590817b4842aa"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.411301 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.411582 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:36.911570915 +0000 UTC m=+144.028624100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.444035 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmh9q" podStartSLOduration=122.444020712 podStartE2EDuration="2m2.444020712s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.441235747 +0000 UTC m=+143.558288932" watchObservedRunningTime="2026-01-03 04:18:36.444020712 +0000 UTC m=+143.561073897" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.444667 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" event={"ID":"ad0b2803-dc7b-4687-b446-39f1e8645d4a","Type":"ContainerStarted","Data":"5a7891c500028491ee3aef07dc21e2d2c1118cc70d48747698d0fcbbf24eaf47"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.500877 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" event={"ID":"725589e5-e586-4ffc-b0f9-e58c09aa64e6","Type":"ContainerStarted","Data":"1a56e90578a7c08490eb645df87b61eacf732139a93f762a1e303ee9d713fe9f"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.513824 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.514038 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.014024353 +0000 UTC m=+144.131077528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.514190 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.514431 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.014425053 +0000 UTC m=+144.131478238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.538590 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" event={"ID":"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4","Type":"ContainerStarted","Data":"5e22d683b46822f3e4e9c618d820c2af8d8cd9c52ceafae50d98c3939468c4ee"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.599671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" event={"ID":"25320d8e-60c1-4c82-b281-558f7db73000","Type":"ContainerStarted","Data":"c0ab4055defa6c3ccb2bf8503176694ccb16f9309cd8f6704413f4c76232d2fa"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.615493 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.615876 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.115860515 +0000 UTC m=+144.232913700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.632991 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" event={"ID":"af7bc8fa-5059-413a-b03b-8a95d39f786c","Type":"ContainerStarted","Data":"ddc9648018ccb7facb6057f4023e9cddc43ef68aba6c751b19ab85d551cc19af"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.638983 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" event={"ID":"8c095fb6-dc75-4da1-9361-b1601d6130ac","Type":"ContainerStarted","Data":"24a6c5e60463c5965f070e7cbc30e3188dd58f3947949a4b6bac2067b522021d"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.643755 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ll6mp" podStartSLOduration=123.6437387 podStartE2EDuration="2m3.6437387s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.643547365 +0000 UTC m=+143.760600550" watchObservedRunningTime="2026-01-03 04:18:36.6437387 +0000 UTC m=+143.760791875" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.651606 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" 
event={"ID":"6611402f-7bf5-4147-a8ef-d96f052cd572","Type":"ContainerStarted","Data":"7c9c8beb06ba9db7ebde438fa75e0c95563a3a7a9a70e5977128d2ba39d8ee35"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.655782 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" event={"ID":"aaf191da-bc40-411b-bef2-649b5063978e","Type":"ContainerStarted","Data":"3582e3e8832df54132bc2fccee52a77d77978eae6c090d7ef064eca1e20e8cac"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.656423 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.671958 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" event={"ID":"8e1fb9b3-cd30-4419-9872-ec29cccb5957","Type":"ContainerStarted","Data":"e6347d0f3ab514c915cee8ff88f35b43d819bb195e9a1e2b42da588ea66a7389"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.680579 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8cgc7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.680647 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.700877 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" 
event={"ID":"79d565a7-6d27-46e7-83a2-49034797be22","Type":"ContainerStarted","Data":"92bb559515ed90e3a8c4b32742e0e7cd74e4b48ddf7ea623cc14e85ece0e802f"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.720670 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" event={"ID":"29b1ddd4-b577-45ed-b89e-6dfda5975433","Type":"ContainerStarted","Data":"a506ee5b94462f508d317595327d4d98062fbced1803e213b8d05d0e3c5e6f5b"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.721454 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.723122 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.223111191 +0000 UTC m=+144.340164376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.732115 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lclwv" podStartSLOduration=123.732096311 podStartE2EDuration="2m3.732096311s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.681079057 +0000 UTC m=+143.798132242" watchObservedRunningTime="2026-01-03 04:18:36.732096311 +0000 UTC m=+143.849149496" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.733572 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" podStartSLOduration=122.733565901 podStartE2EDuration="2m2.733565901s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.731737502 +0000 UTC m=+143.848790697" watchObservedRunningTime="2026-01-03 04:18:36.733565901 +0000 UTC m=+143.850619076" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.746012 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" 
event={"ID":"06d641b8-c3b3-4817-aff8-b0c24d75df64","Type":"ContainerStarted","Data":"48cef298839d240587ced850795c39a43307343d34f5a157d619f3ea4e17b253"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.775303 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fp86k" podStartSLOduration=123.775285116 podStartE2EDuration="2m3.775285116s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.773163029 +0000 UTC m=+143.890216214" watchObservedRunningTime="2026-01-03 04:18:36.775285116 +0000 UTC m=+143.892338301" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.802111 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" event={"ID":"a9154ae2-7dc4-4ce8-bbda-11d33395e8ff","Type":"ContainerStarted","Data":"0a8f9ec1bf04750862822cf8691678ff43505656b2340204319befd07127c175"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.818126 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" event={"ID":"61dddc61-7785-4656-833d-21b330e2910b","Type":"ContainerStarted","Data":"d60a050f4c04c8f9a33beccf2ecc5d9cc47ff68bae2f71809e3bd3ae1e6de5f3"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.818170 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" event={"ID":"61dddc61-7785-4656-833d-21b330e2910b","Type":"ContainerStarted","Data":"8b1b0c2f31fc2be9eaff509f5a73099c939de9acf6bb0b7ef2958f02f8a06825"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.819327 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w8m6t" 
event={"ID":"a8e6d994-0d40-413e-a9b1-26d5fb968747","Type":"ContainerStarted","Data":"7b92e9ea727f36832c536b651f2df637e068eade3a42184508ba79b3ffdb8424"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.822849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.823884 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 04:18:36 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 03 04:18:36 crc kubenswrapper[4865]: [+]process-running ok Jan 03 04:18:36 crc kubenswrapper[4865]: healthz check failed Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.823920 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.824148 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.324132171 +0000 UTC m=+144.441185356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.824840 4865 generic.go:334] "Generic (PLEG): container finished" podID="3fbd343a-070a-4b55-b3f9-37114883bbbb" containerID="ed46ae2c6a4aa1e26f74552f2054e96a391dbd0d8030e30c852775ac6ccf8b43" exitCode=0 Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.824896 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" event={"ID":"3fbd343a-070a-4b55-b3f9-37114883bbbb","Type":"ContainerDied","Data":"ed46ae2c6a4aa1e26f74552f2054e96a391dbd0d8030e30c852775ac6ccf8b43"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.829294 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" event={"ID":"75bb18cc-0cb4-4018-9d99-111bbfecb29a","Type":"ContainerStarted","Data":"20111dab28d14ef1cf21e818fa41954686abf5da772d0c861321a1c69106c544"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.836947 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" event={"ID":"b05c89ba-dd9d-4272-8f12-b0edd96985bb","Type":"ContainerStarted","Data":"9c98590cfe0cc15f67ffc906db27dd681595baf0564b10e2183b0f2f9d80b043"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.841127 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" podStartSLOduration=122.841113595 
podStartE2EDuration="2m2.841113595s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.839868592 +0000 UTC m=+143.956921767" watchObservedRunningTime="2026-01-03 04:18:36.841113595 +0000 UTC m=+143.958166780" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.842294 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" event={"ID":"c2f1a3e5-c54e-40ae-8640-e7b964e2669b","Type":"ContainerStarted","Data":"891ad06cfc90619f022f6e7c31234770c877a33b12f226481d0ca508f5772813"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.844549 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" event={"ID":"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60","Type":"ContainerStarted","Data":"27e952f3dbe4bd6a66b814eb48de750a9be61782767099ed8b6f3dcf6fb0483f"} Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.903608 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.903762 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:36 crc kubenswrapper[4865]: I0103 04:18:36.924350 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:36 crc kubenswrapper[4865]: E0103 04:18:36.931722 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.431695915 +0000 UTC m=+144.548749100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.024449 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" podStartSLOduration=123.024434145 podStartE2EDuration="2m3.024434145s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:36.968674544 +0000 UTC m=+144.085727739" watchObservedRunningTime="2026-01-03 04:18:37.024434145 +0000 UTC m=+144.141487330" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.025689 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.026032 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-03 04:18:37.526015617 +0000 UTC m=+144.643068792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.101773 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rnlzq" podStartSLOduration=124.101759881 podStartE2EDuration="2m4.101759881s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:37.100680202 +0000 UTC m=+144.217733397" watchObservedRunningTime="2026-01-03 04:18:37.101759881 +0000 UTC m=+144.218813066" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.101883 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xtz6v" podStartSLOduration=124.101878384 podStartE2EDuration="2m4.101878384s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:37.029922311 +0000 UTC m=+144.146975496" watchObservedRunningTime="2026-01-03 04:18:37.101878384 +0000 UTC m=+144.218931569" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.126828 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.127161 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.62714564 +0000 UTC m=+144.744198825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.138765 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2v6xt" podStartSLOduration=124.13875284 podStartE2EDuration="2m4.13875284s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:37.136866949 +0000 UTC m=+144.253920134" watchObservedRunningTime="2026-01-03 04:18:37.13875284 +0000 UTC m=+144.255806025" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.227411 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.227759 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.727743108 +0000 UTC m=+144.844796293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.318608 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.328236 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.328594 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.828582534 +0000 UTC m=+144.945635719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.355328 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" podStartSLOduration=124.355311038 podStartE2EDuration="2m4.355311038s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:37.18067614 +0000 UTC m=+144.297729325" watchObservedRunningTime="2026-01-03 04:18:37.355311038 +0000 UTC m=+144.472364223" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.429886 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.430180 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:37.930158198 +0000 UTC m=+145.047211383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.531054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.531464 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.031446396 +0000 UTC m=+145.148499581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.632487 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.632666 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.13263958 +0000 UTC m=+145.249692765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.632741 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.633001 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.132987589 +0000 UTC m=+145.250040774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.734246 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.734346 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.234329788 +0000 UTC m=+145.351382963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.734617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.734886 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.234877323 +0000 UTC m=+145.351930508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.827887 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 04:18:37 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 03 04:18:37 crc kubenswrapper[4865]: [+]process-running ok Jan 03 04:18:37 crc kubenswrapper[4865]: healthz check failed Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.827956 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.836465 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.836689 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-03 04:18:38.336653833 +0000 UTC m=+145.453707028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.836794 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.837184 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.337167606 +0000 UTC m=+145.454220871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.849414 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" event={"ID":"472b09fa-6397-442e-bd28-40d3dc0aff44","Type":"ContainerStarted","Data":"cc150b4c1328063835a067bff6db6097c1e9f11e02a875b648b648636765de8a"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.851336 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" event={"ID":"af7bc8fa-5059-413a-b03b-8a95d39f786c","Type":"ContainerStarted","Data":"72b67bda1eeb15ac980caad2c8ce2fd574c255bb596173a329382f7d2056d2da"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.852585 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" event={"ID":"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936","Type":"ContainerStarted","Data":"95af12bfecaf4f8c1b52420fc5945421e7eaae34b87526155d367973a52c68a5"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.853540 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" event={"ID":"725589e5-e586-4ffc-b0f9-e58c09aa64e6","Type":"ContainerStarted","Data":"acd4958de22d7872a341807eccf4eceda60b164917896d45207af5f5643b1bc5"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.854968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" 
event={"ID":"29b1ddd4-b577-45ed-b89e-6dfda5975433","Type":"ContainerStarted","Data":"f9074e49833e17aa04df1c6048b37bd00c37cc6e9a1832fe2d4d8eddaaa1bc25"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.854993 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" event={"ID":"29b1ddd4-b577-45ed-b89e-6dfda5975433","Type":"ContainerStarted","Data":"517bcb7d64a927a894780e52b1423d13ed56bb82fdbaa4dd0489e42f7627dd1f"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.856601 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" event={"ID":"3fbd343a-070a-4b55-b3f9-37114883bbbb","Type":"ContainerStarted","Data":"7ac02180929b9048a15ef081ff3ee88c924d3bc0e581bc69500f1c180535a430"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.856865 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.857745 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" event={"ID":"14816cfb-cafa-40c7-96f1-b87d310e9264","Type":"ContainerStarted","Data":"d85f2f613bc5b6088d8da91d5eb33f0ad2f2fc5f566d5a843f1a729621f0e2d4"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.857890 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.858919 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" event={"ID":"aaf191da-bc40-411b-bef2-649b5063978e","Type":"ContainerStarted","Data":"e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.859517 4865 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8cgc7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.859553 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.860817 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" event={"ID":"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb","Type":"ContainerStarted","Data":"0308d67299df6ddacf2c6beaccc6b9fe035d0ffd0c22108cfdbacf6b84ae1af8"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.860841 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" event={"ID":"35518fa6-7b4e-4a4d-bba6-f1b70a07cfcb","Type":"ContainerStarted","Data":"7fbad9db7fa53545a76746d4207c56d7c6cdc800e8dcb0962ebe563fa308e579"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.862061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w8m6t" event={"ID":"a8e6d994-0d40-413e-a9b1-26d5fb968747","Type":"ContainerStarted","Data":"f41d692f45ac1eb9fdff7c469400696717897b0f284fbac887285df7680c6a3a"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.862099 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w8m6t" event={"ID":"a8e6d994-0d40-413e-a9b1-26d5fb968747","Type":"ContainerStarted","Data":"ed327dddf3fd892caad3ed9fc32fa1ff191fb16d3ceacf94211d6a099d24560e"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 
04:18:37.862198 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.863297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" event={"ID":"06d641b8-c3b3-4817-aff8-b0c24d75df64","Type":"ContainerStarted","Data":"f575f7c50f741456d9a1fc7ca1cb1b6e9699c8225dd3f42f7f2a6bd826f8698d"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.863337 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" event={"ID":"06d641b8-c3b3-4817-aff8-b0c24d75df64","Type":"ContainerStarted","Data":"69d8df1cda6664f2f7f9c90d93f7308c02daf8d766cb4f7e00542b7d50e024a1"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.863684 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.864624 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" event={"ID":"8abcadd4-2cc5-4337-9f6f-4e462d9e25e4","Type":"ContainerStarted","Data":"423d8907ea31bc67b8f9e180fd11f03e17ad745e994d586cfc98c2877e915989"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.865110 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.866036 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" event={"ID":"8c095fb6-dc75-4da1-9361-b1601d6130ac","Type":"ContainerStarted","Data":"7046db199be7a9047c2b201161adfcfe4fd0c0be177696124bff0932c258618d"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 
04:18:37.866484 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.867566 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" event={"ID":"8e1fb9b3-cd30-4419-9872-ec29cccb5957","Type":"ContainerStarted","Data":"a9235461bc1bebc4ca82760ee1cc5c8911961252850822d1a76c7cbf777f3639"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.867600 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" event={"ID":"8e1fb9b3-cd30-4419-9872-ec29cccb5957","Type":"ContainerStarted","Data":"2fa7f8388326ac8ecf54639e8d631b0682b7ebfb706b05cd8afa718e73f9be62"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.868986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" event={"ID":"ad0b2803-dc7b-4687-b446-39f1e8645d4a","Type":"ContainerStarted","Data":"73d7294b46c536362cb563789439c0cd5e892cd963eb93e66747483d13a43c8b"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.869035 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" event={"ID":"ad0b2803-dc7b-4687-b446-39f1e8645d4a","Type":"ContainerStarted","Data":"1163c03b3e200694874613d88dfda6ff77a5fe2a240fd5b2c9451faa636705bd"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.870312 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8lxcr" event={"ID":"a9154ae2-7dc4-4ce8-bbda-11d33395e8ff","Type":"ContainerStarted","Data":"2db5a4742bb528386c2f88845927334e8ccc9c755bbed88c782d89725ad74f8e"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.871958 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" event={"ID":"61dddc61-7785-4656-833d-21b330e2910b","Type":"ContainerStarted","Data":"32548f5d3375c92baa9e2ae4401d87bde65fc62eef5090f593efc4ab1c118c40"} Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.880880 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q4w96" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.899460 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.938225 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.938502 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.438479774 +0000 UTC m=+145.555532959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.939148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:37 crc kubenswrapper[4865]: E0103 04:18:37.946077 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.446059997 +0000 UTC m=+145.563113182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:37 crc kubenswrapper[4865]: I0103 04:18:37.950762 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.040310 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.040532 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.540502531 +0000 UTC m=+145.657555716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.040618 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.040959 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.540942473 +0000 UTC m=+145.657995658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.095996 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mftcp" podStartSLOduration=124.095976504 podStartE2EDuration="2m4.095976504s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.001959331 +0000 UTC m=+145.119012516" watchObservedRunningTime="2026-01-03 04:18:38.095976504 +0000 UTC m=+145.213029689" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.141696 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.141815 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.641795589 +0000 UTC m=+145.758848774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.142058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.142479 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.642468276 +0000 UTC m=+145.759521461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.180346 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" podStartSLOduration=125.180328738 podStartE2EDuration="2m5.180328738s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.098417099 +0000 UTC m=+145.215470284" watchObservedRunningTime="2026-01-03 04:18:38.180328738 +0000 UTC m=+145.297381923" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.208373 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-q9btt" podStartSLOduration=124.208353587 podStartE2EDuration="2m4.208353587s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.18150516 +0000 UTC m=+145.298558355" watchObservedRunningTime="2026-01-03 04:18:38.208353587 +0000 UTC m=+145.325406772" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.208727 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pwzdz" podStartSLOduration=124.208721867 podStartE2EDuration="2m4.208721867s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.206821796 +0000 UTC m=+145.323874981" watchObservedRunningTime="2026-01-03 04:18:38.208721867 +0000 UTC m=+145.325775052" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.243473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.243623 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.743598149 +0000 UTC m=+145.860651334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.244059 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.244360 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.744350099 +0000 UTC m=+145.861403344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.280636 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qb4sc" podStartSLOduration=124.280616639 podStartE2EDuration="2m4.280616639s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.249742934 +0000 UTC m=+145.366796119" watchObservedRunningTime="2026-01-03 04:18:38.280616639 +0000 UTC m=+145.397669824" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.281228 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc" podStartSLOduration=125.281224994 podStartE2EDuration="2m5.281224994s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.275395789 +0000 UTC m=+145.392448974" watchObservedRunningTime="2026-01-03 04:18:38.281224994 +0000 UTC m=+145.398278179" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.330192 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" podStartSLOduration=124.330173803 podStartE2EDuration="2m4.330173803s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.325776515 +0000 UTC m=+145.442829700" watchObservedRunningTime="2026-01-03 04:18:38.330173803 +0000 UTC m=+145.447226988" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.345786 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.345989 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.845960434 +0000 UTC m=+145.963013609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.346058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.346349 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.846342275 +0000 UTC m=+145.963395460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.408419 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-w8m6t" podStartSLOduration=9.408366633 podStartE2EDuration="9.408366633s" podCreationTimestamp="2026-01-03 04:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.406896814 +0000 UTC m=+145.523949999" watchObservedRunningTime="2026-01-03 04:18:38.408366633 +0000 UTC m=+145.525419818" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.446906 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.447448 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:38.947432097 +0000 UTC m=+146.064485282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.461070 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5h97" podStartSLOduration=125.461058051 podStartE2EDuration="2m5.461058051s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.458944904 +0000 UTC m=+145.575998089" watchObservedRunningTime="2026-01-03 04:18:38.461058051 +0000 UTC m=+145.578111226" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.548650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.549202 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.049188617 +0000 UTC m=+146.166241802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.609876 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-s7cr9" podStartSLOduration=124.609855638 podStartE2EDuration="2m4.609855638s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.609115308 +0000 UTC m=+145.726168493" watchObservedRunningTime="2026-01-03 04:18:38.609855638 +0000 UTC m=+145.726908823" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.610511 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xjxbl" podStartSLOduration=124.610505595 podStartE2EDuration="2m4.610505595s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.531669708 +0000 UTC m=+145.648722893" watchObservedRunningTime="2026-01-03 04:18:38.610505595 +0000 UTC m=+145.727558780" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.625819 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zc4xp" podStartSLOduration=124.625801864 podStartE2EDuration="2m4.625801864s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.622811024 +0000 UTC m=+145.739864209" watchObservedRunningTime="2026-01-03 04:18:38.625801864 +0000 UTC m=+145.742855049" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.649885 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.650019 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.150000411 +0000 UTC m=+146.267053596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.650138 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.650466 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.150455713 +0000 UTC m=+146.267508898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.718940 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" podStartSLOduration=124.718924443 podStartE2EDuration="2m4.718924443s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.716356004 +0000 UTC m=+145.833409189" watchObservedRunningTime="2026-01-03 04:18:38.718924443 +0000 UTC m=+145.835977628" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.719182 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cmfx6" podStartSLOduration=124.71917658 podStartE2EDuration="2m4.71917658s" podCreationTimestamp="2026-01-03 04:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:38.6813951 +0000 UTC m=+145.798448285" watchObservedRunningTime="2026-01-03 04:18:38.71917658 +0000 UTC m=+145.836229765" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.751188 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.751610 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.251575986 +0000 UTC m=+146.368629171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.825005 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 04:18:38 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 03 04:18:38 crc kubenswrapper[4865]: [+]process-running ok Jan 03 04:18:38 crc kubenswrapper[4865]: healthz check failed Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.825059 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.852265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.852588 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.352576885 +0000 UTC m=+146.469630070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.859033 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mfpfj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.859091 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" podUID="14816cfb-cafa-40c7-96f1-b87d310e9264" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.878098 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" event={"ID":"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936","Type":"ContainerStarted","Data":"b8497108bfb8d02379bc3a6aa00457e281c17d68cb6c40c96bbeb3fa71886d71"} Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.878567 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8cgc7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.878601 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.953795 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.953998 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.453964986 +0000 UTC m=+146.571018171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:38 crc kubenswrapper[4865]: I0103 04:18:38.954472 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:38 crc kubenswrapper[4865]: E0103 04:18:38.956420 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.45640368 +0000 UTC m=+146.573456865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.055847 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.056056 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.556026064 +0000 UTC m=+146.673079249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.157298 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.157712 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.65769555 +0000 UTC m=+146.774748735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.258160 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.258369 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.75834348 +0000 UTC m=+146.875396655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.258455 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.258799 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.758790702 +0000 UTC m=+146.875843887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.359464 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.359799 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.859780781 +0000 UTC m=+146.976833976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.460750 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.461035 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:39.961023828 +0000 UTC m=+147.078077013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.510361 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fq86b"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.511608 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.515079 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.526661 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq86b"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.562133 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.562674 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-utilities\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 
crc kubenswrapper[4865]: I0103 04:18:39.562719 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-catalog-content\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.562771 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55d2j\" (UniqueName: \"kubernetes.io/projected/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-kube-api-access-55d2j\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.562927 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.062911871 +0000 UTC m=+147.179965056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.582572 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mfpfj" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.664800 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.664909 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-utilities\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.664945 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-catalog-content\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.664993 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-55d2j\" (UniqueName: \"kubernetes.io/projected/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-kube-api-access-55d2j\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.665859 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.165840311 +0000 UTC m=+147.282893496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.666100 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-catalog-content\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.670722 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-utilities\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.685765 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-55d2j\" (UniqueName: \"kubernetes.io/projected/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-kube-api-access-55d2j\") pod \"certified-operators-fq86b\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.705310 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s4c5k"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.706143 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.709321 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.718668 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4c5k"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.746347 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.747018 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.754994 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.755224 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.758363 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.763867 4865 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.765625 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.765812 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-catalog-content\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.765852 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-utilities\") pod 
\"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.765943 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdbc\" (UniqueName: \"kubernetes.io/projected/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-kube-api-access-ncdbc\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.766044 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.26602968 +0000 UTC m=+147.383082855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.825507 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 04:18:39 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 03 04:18:39 crc kubenswrapper[4865]: [+]process-running ok Jan 03 04:18:39 crc kubenswrapper[4865]: healthz check failed Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.825569 4865 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.846114 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.866745 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncdbc\" (UniqueName: \"kubernetes.io/projected/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-kube-api-access-ncdbc\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.866795 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.866824 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-catalog-content\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.866854 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.866874 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-utilities\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.867087 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.867530 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.367497061 +0000 UTC m=+147.484550246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.867693 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-utilities\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.867919 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-catalog-content\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.885631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncdbc\" (UniqueName: \"kubernetes.io/projected/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-kube-api-access-ncdbc\") pod \"community-operators-s4c5k\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.893022 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ldq6h"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.895245 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.902872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldq6h"] Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.906859 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" event={"ID":"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936","Type":"ContainerStarted","Data":"2bbcb17acccf4a1091bdb85003b10650d88b2622c0737945bf2c64deb18648ba"} Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.906910 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" event={"ID":"886e1b37-a8e5-44bd-a8ba-ac2b09a5d936","Type":"ContainerStarted","Data":"304ccddc32010d268ffa4c955afefd596024943e5e9de410ef263063db8a5d1b"} Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.941542 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rrzvr" podStartSLOduration=10.94152511 podStartE2EDuration="10.94152511s" podCreationTimestamp="2026-01-03 04:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:39.940108032 +0000 UTC m=+147.057161217" watchObservedRunningTime="2026-01-03 04:18:39.94152511 +0000 UTC m=+147.058578295" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.973761 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.974215 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.974287 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-catalog-content\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.974339 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.974474 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-utilities\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.974533 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tzj\" (UniqueName: \"kubernetes.io/projected/201f9ab0-bcbf-4497-9a16-c33765209c84-kube-api-access-77tzj\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:39 crc kubenswrapper[4865]: I0103 04:18:39.975437 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:39 crc kubenswrapper[4865]: E0103 04:18:39.975517 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.475497808 +0000 UTC m=+147.592550993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.010776 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.039647 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.082520 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.086891 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.086913 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-utilities\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.086942 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77tzj\" (UniqueName: \"kubernetes.io/projected/201f9ab0-bcbf-4497-9a16-c33765209c84-kube-api-access-77tzj\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.086994 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-catalog-content\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.087303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-catalog-content\") pod 
\"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:40 crc kubenswrapper[4865]: E0103 04:18:40.088110 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.588099678 +0000 UTC m=+147.705152863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.091505 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6hjp"] Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.091845 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-utilities\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.092311 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.136484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tzj\" (UniqueName: \"kubernetes.io/projected/201f9ab0-bcbf-4497-9a16-c33765209c84-kube-api-access-77tzj\") pod \"certified-operators-ldq6h\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.156820 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq86b"] Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.165994 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6hjp"] Jan 03 04:18:40 crc kubenswrapper[4865]: W0103 04:18:40.187159 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6bbcfc2_cdf7_4d1d_b3c0_4d0598aba1c8.slice/crio-d278224a369678296388b0c89b2e81655971082342a21a119471dd182ae357d4 WatchSource:0}: Error finding container d278224a369678296388b0c89b2e81655971082342a21a119471dd182ae357d4: Status 404 returned error can't find the container with id d278224a369678296388b0c89b2e81655971082342a21a119471dd182ae357d4 Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.187759 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.187972 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-utilities\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.188034 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.188077 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d67qz\" (UniqueName: \"kubernetes.io/projected/6285a29b-28c2-4552-bb1b-3d31610858a2-kube-api-access-d67qz\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.188139 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.188180 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:40 crc 
kubenswrapper[4865]: I0103 04:18:40.188221 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-catalog-content\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: E0103 04:18:40.188364 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.688346266 +0000 UTC m=+147.805399451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.198780 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.202246 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.207012 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.222669 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.289531 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-catalog-content\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.289789 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-utilities\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.289835 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d67qz\" (UniqueName: \"kubernetes.io/projected/6285a29b-28c2-4552-bb1b-3d31610858a2-kube-api-access-d67qz\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: 
I0103 04:18:40.289859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.289879 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:18:40 crc kubenswrapper[4865]: E0103 04:18:40.290142 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.790131177 +0000 UTC m=+147.907184362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hmk4r" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.290807 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-utilities\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.291786 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-catalog-content\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.307655 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.312763 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d67qz\" (UniqueName: \"kubernetes.io/projected/6285a29b-28c2-4552-bb1b-3d31610858a2-kube-api-access-d67qz\") pod \"community-operators-q6hjp\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " 
pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.322867 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4c5k"] Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.374879 4865 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-03T04:18:39.763889982Z","Handler":null,"Name":""} Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.389174 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.391358 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 04:18:40 crc kubenswrapper[4865]: E0103 04:18:40.391668 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 04:18:40.89165033 +0000 UTC m=+148.008703515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.400012 4865 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.400046 4865 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.414833 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.427242 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.445330 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6hjp"
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.493272 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r"
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.496003 4865 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.496033 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r"
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.529692 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hmk4r\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r"
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.601961 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.644793 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.686728 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.739468 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.739515 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.750558 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r"
Jan 03 04:18:40 crc kubenswrapper[4865]: W0103 04:18:40.808236 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-823194435d7aab3f41846f0450c92acfe63e341c6f6809393f3a1919c75f0f06 WatchSource:0}: Error finding container 823194435d7aab3f41846f0450c92acfe63e341c6f6809393f3a1919c75f0f06: Status 404 returned error can't find the container with id 823194435d7aab3f41846f0450c92acfe63e341c6f6809393f3a1919c75f0f06
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.814205 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6hjp"]
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.827348 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 04:18:40 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld
Jan 03 04:18:40 crc kubenswrapper[4865]: [+]process-running ok
Jan 03 04:18:40 crc kubenswrapper[4865]: healthz check failed
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.827698 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.845136 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldq6h"]
Jan 03 04:18:40 crc kubenswrapper[4865]: W0103 04:18:40.875694 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-8580745f131a1a99918c79d9b33e67ab66bd8286f131072a22654af5ab99d432 WatchSource:0}: Error finding container 8580745f131a1a99918c79d9b33e67ab66bd8286f131072a22654af5ab99d432: Status 404 returned error can't find the container with id 8580745f131a1a99918c79d9b33e67ab66bd8286f131072a22654af5ab99d432
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.911681 4865 generic.go:334] "Generic (PLEG): container finished" podID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerID="11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0" exitCode=0
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.911825 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4c5k" event={"ID":"62322f6e-727e-4261-bdaa-9b7e91b8c1f7","Type":"ContainerDied","Data":"11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.911935 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4c5k" event={"ID":"62322f6e-727e-4261-bdaa-9b7e91b8c1f7","Type":"ContainerStarted","Data":"5df109944ab20c8ac45a4d97d1e7d134266a4a3ee51cf5c052993bb6e195ea42"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.922233 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.924546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"823194435d7aab3f41846f0450c92acfe63e341c6f6809393f3a1919c75f0f06"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.927877 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72","Type":"ContainerStarted","Data":"110f4ea1f2d493af783433d97cb3eac7ad5274296628dda072282b873c541a28"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.929845 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8580745f131a1a99918c79d9b33e67ab66bd8286f131072a22654af5ab99d432"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.933662 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq6h" event={"ID":"201f9ab0-bcbf-4497-9a16-c33765209c84","Type":"ContainerStarted","Data":"32d7bced6f6c3265960e36c514c115d5c9df989ecf4c7799dbdd6716409c4701"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.949614 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6hjp" event={"ID":"6285a29b-28c2-4552-bb1b-3d31610858a2","Type":"ContainerStarted","Data":"272038690a3017bfdf40a20f4975a0fb3b3c8d711b49ffd38460f4a3fe503401"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.951137 4865 generic.go:334] "Generic (PLEG): container finished" podID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerID="5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2" exitCode=0
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.951313 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq86b" event={"ID":"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8","Type":"ContainerDied","Data":"5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.951424 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq86b" event={"ID":"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8","Type":"ContainerStarted","Data":"d278224a369678296388b0c89b2e81655971082342a21a119471dd182ae357d4"}
Jan 03 04:18:40 crc kubenswrapper[4865]: I0103 04:18:40.956638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c82d12b3340f7a12a5c69b84e4825c8fa1cd089034fb444e8aa38848fc3bac47"}
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.014680 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmk4r"]
Jan 03 04:18:41 crc kubenswrapper[4865]: W0103 04:18:41.029833 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0447a4f_7f4f_41c7_912e_34b7e2b5e077.slice/crio-5817d51e6089387991609c632e48f8ecd18f1a5a22fd9c6d6b8c4c83722e2a16 WatchSource:0}: Error finding container 5817d51e6089387991609c632e48f8ecd18f1a5a22fd9c6d6b8c4c83722e2a16: Status 404 returned error can't find the container with id 5817d51e6089387991609c632e48f8ecd18f1a5a22fd9c6d6b8c4c83722e2a16
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.162739 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.487643 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8vpwf"]
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.488622 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.491200 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.506116 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vpwf"]
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.516990 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-catalog-content\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.517076 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9nrq\" (UniqueName: \"kubernetes.io/projected/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-kube-api-access-k9nrq\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.517126 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-utilities\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.535428 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.618335 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9nrq\" (UniqueName: \"kubernetes.io/projected/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-kube-api-access-k9nrq\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.618458 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-utilities\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.618528 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-catalog-content\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.618995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-catalog-content\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.619039 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-utilities\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.637496 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9nrq\" (UniqueName: \"kubernetes.io/projected/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-kube-api-access-k9nrq\") pod \"redhat-marketplace-8vpwf\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.802134 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vpwf"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.826007 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 04:18:41 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld
Jan 03 04:18:41 crc kubenswrapper[4865]: [+]process-running ok
Jan 03 04:18:41 crc kubenswrapper[4865]: healthz check failed
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.826083 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.892476 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-98z6f"]
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.894733 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.916749 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98z6f"]
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.921157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-catalog-content\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.921261 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfw6\" (UniqueName: \"kubernetes.io/projected/1b389eb7-6d2c-4e85-a3a1-88a517988670-kube-api-access-lwfw6\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.921314 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-utilities\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.977570 4865 generic.go:334] "Generic (PLEG): container finished" podID="18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72" containerID="b7a11324e107177b61f1d9d0e79edd727774d4576d0bc47c19d5bd546fa8be3c" exitCode=0
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.977656 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72","Type":"ContainerDied","Data":"b7a11324e107177b61f1d9d0e79edd727774d4576d0bc47c19d5bd546fa8be3c"}
Jan 03 04:18:41 crc kubenswrapper[4865]: I0103 04:18:41.981415 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"01c642247e5a5633bb2ee822023920afc6d1056f1b92b74bcade08504ba0d524"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.000515 4865 generic.go:334] "Generic (PLEG): container finished" podID="ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" containerID="27e952f3dbe4bd6a66b814eb48de750a9be61782767099ed8b6f3dcf6fb0483f" exitCode=0
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.000645 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" event={"ID":"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60","Type":"ContainerDied","Data":"27e952f3dbe4bd6a66b814eb48de750a9be61782767099ed8b6f3dcf6fb0483f"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.006053 4865 generic.go:334] "Generic (PLEG): container finished" podID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerID="cd1ceeaa11466171797f6b6b5558e49f492b7239d05443be191443de3bc5cb34" exitCode=0
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.007159 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq6h" event={"ID":"201f9ab0-bcbf-4497-9a16-c33765209c84","Type":"ContainerDied","Data":"cd1ceeaa11466171797f6b6b5558e49f492b7239d05443be191443de3bc5cb34"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.014534 4865 generic.go:334] "Generic (PLEG): container finished" podID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerID="eb6aa70d8b58bce085c00b241c22276bf71dad6fbcce40adc842f7ed0ed31424" exitCode=0
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.014755 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6hjp" event={"ID":"6285a29b-28c2-4552-bb1b-3d31610858a2","Type":"ContainerDied","Data":"eb6aa70d8b58bce085c00b241c22276bf71dad6fbcce40adc842f7ed0ed31424"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.021771 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vpwf"]
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.025723 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"72b190b821678176d995b199ded6f9c0b208cb23c6df66abf5b775d3b169f291"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.026256 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-utilities\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.026420 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-catalog-content\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.026527 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfw6\" (UniqueName: \"kubernetes.io/projected/1b389eb7-6d2c-4e85-a3a1-88a517988670-kube-api-access-lwfw6\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.027877 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-utilities\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.028753 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-catalog-content\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.031455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"09c2ec37a2358f99b66e7c955c855c59717616759b0644524d29356c980ac25c"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.031956 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.035093 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" event={"ID":"b0447a4f-7f4f-41c7-912e-34b7e2b5e077","Type":"ContainerStarted","Data":"6e7318c1c1c84204b324152d7e0478199298925ae93e89e76f27ae63d7d6ee98"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.035135 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" event={"ID":"b0447a4f-7f4f-41c7-912e-34b7e2b5e077","Type":"ContainerStarted","Data":"5817d51e6089387991609c632e48f8ecd18f1a5a22fd9c6d6b8c4c83722e2a16"}
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.035669 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.052107 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfw6\" (UniqueName: \"kubernetes.io/projected/1b389eb7-6d2c-4e85-a3a1-88a517988670-kube-api-access-lwfw6\") pod \"redhat-marketplace-98z6f\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.119858 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" podStartSLOduration=129.11984181 podStartE2EDuration="2m9.11984181s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:18:42.11652224 +0000 UTC m=+149.233575425" watchObservedRunningTime="2026-01-03 04:18:42.11984181 +0000 UTC m=+149.236894995"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.266124 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98z6f"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.307043 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-zjjj8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.307169 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-zjjj8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.307593 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zjjj8" podUID="49ce7309-7ce7-4325-be8d-fbf7f19b1fcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.307539 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zjjj8" podUID="49ce7309-7ce7-4325-be8d-fbf7f19b1fcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.320589 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cgxlq"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.320629 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cgxlq"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.322508 4865 patch_prober.go:28] interesting pod/console-f9d7485db-cgxlq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.322537 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cgxlq" podUID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.432232 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.432275 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.440765 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.493038 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98z6f"]
Jan 03 04:18:42 crc kubenswrapper[4865]: W0103 04:18:42.497680 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b389eb7_6d2c_4e85_a3a1_88a517988670.slice/crio-df4fa9795240ca28b89b5d8d70b63de4e77f488afdb432cba592b761ce12845e WatchSource:0}: Error finding container df4fa9795240ca28b89b5d8d70b63de4e77f488afdb432cba592b761ce12845e: Status 404 returned error can't find the container with id df4fa9795240ca28b89b5d8d70b63de4e77f488afdb432cba592b761ce12845e
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.696481 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5zt55"]
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.698534 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.702008 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.711270 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zt55"]
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.740004 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-utilities\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.740102 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-catalog-content\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.740168 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lsz9\" (UniqueName: \"kubernetes.io/projected/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-kube-api-access-7lsz9\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.819954 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hjzq4"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.822747 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 03 04:18:42 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld
Jan 03 04:18:42 crc kubenswrapper[4865]: [+]process-running ok
Jan 03 04:18:42 crc kubenswrapper[4865]: healthz check failed
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.822779 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.845711 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-catalog-content\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.845832 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lsz9\" (UniqueName: \"kubernetes.io/projected/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-kube-api-access-7lsz9\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.855690 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-utilities\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.856196 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-utilities\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.856416 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-catalog-content\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:42 crc kubenswrapper[4865]: I0103 04:18:42.875551 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lsz9\" (UniqueName: \"kubernetes.io/projected/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-kube-api-access-7lsz9\") pod \"redhat-operators-5zt55\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.045779 4865 generic.go:334] "Generic (PLEG): container finished" podID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerID="ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b" exitCode=0
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.045865 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vpwf" event={"ID":"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b","Type":"ContainerDied","Data":"ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b"}
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.045892 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vpwf" event={"ID":"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b","Type":"ContainerStarted","Data":"e5a3f91154408afa1ef08b8e5765fcb2b3d61ac6b2e07396edc6191c10166db9"}
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.048545 4865 generic.go:334] "Generic (PLEG): container finished" podID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerID="45cad4386fd829a2ceccbe5f01ce615b754a58fc5b356f0e9ffc6f570c1f09ac" exitCode=0
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.049474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98z6f" event={"ID":"1b389eb7-6d2c-4e85-a3a1-88a517988670","Type":"ContainerDied","Data":"45cad4386fd829a2ceccbe5f01ce615b754a58fc5b356f0e9ffc6f570c1f09ac"}
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.049502 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98z6f" event={"ID":"1b389eb7-6d2c-4e85-a3a1-88a517988670","Type":"ContainerStarted","Data":"df4fa9795240ca28b89b5d8d70b63de4e77f488afdb432cba592b761ce12845e"}
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.056323 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fn8hc"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.060444 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zt55"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.068272 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.088410 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lmd79"]
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.089590 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmd79"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.103261 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmd79"]
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.163409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kwz\" (UniqueName: \"kubernetes.io/projected/e0efa00a-22e7-456b-bf83-862b22519818-kube-api-access-r7kwz\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.163504 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-utilities\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.163571 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-catalog-content\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.268964 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7kwz\" (UniqueName: \"kubernetes.io/projected/e0efa00a-22e7-456b-bf83-862b22519818-kube-api-access-r7kwz\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79"
Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.269232 4865 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-utilities\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.269276 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-catalog-content\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.269690 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-catalog-content\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.270125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-utilities\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.285809 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7kwz\" (UniqueName: \"kubernetes.io/projected/e0efa00a-22e7-456b-bf83-862b22519818-kube-api-access-r7kwz\") pod \"redhat-operators-lmd79\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.378523 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.450330 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.468681 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.482202 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8r67\" (UniqueName: \"kubernetes.io/projected/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-kube-api-access-l8r67\") pod \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.482303 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-secret-volume\") pod \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.482347 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kube-api-access\") pod \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\" (UID: \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.482427 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kubelet-dir\") pod \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\" (UID: \"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72\") " Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.482457 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-config-volume\") pod \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\" (UID: \"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60\") " Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.482975 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72" (UID: "18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.483356 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" (UID: "ed6f7e21-c705-44d9-9092-0f2ed4a7cf60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.488282 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72" (UID: "18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.489172 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" (UID: "ed6f7e21-c705-44d9-9092-0f2ed4a7cf60"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.491039 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-kube-api-access-l8r67" (OuterVolumeSpecName: "kube-api-access-l8r67") pod "ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" (UID: "ed6f7e21-c705-44d9-9092-0f2ed4a7cf60"). InnerVolumeSpecName "kube-api-access-l8r67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.584531 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8r67\" (UniqueName: \"kubernetes.io/projected/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-kube-api-access-l8r67\") on node \"crc\" DevicePath \"\"" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.584746 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.584755 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.584763 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.584771 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.634554 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-5zt55"] Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.838490 4865 patch_prober.go:28] interesting pod/router-default-5444994796-hjzq4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 04:18:43 crc kubenswrapper[4865]: [+]has-synced ok Jan 03 04:18:43 crc kubenswrapper[4865]: [+]process-running ok Jan 03 04:18:43 crc kubenswrapper[4865]: healthz check failed Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.838543 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hjzq4" podUID="5a4248d3-b647-41fa-9d18-b0ca99fd4cbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 04:18:43 crc kubenswrapper[4865]: I0103 04:18:43.919843 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmd79"] Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.068875 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmd79" event={"ID":"e0efa00a-22e7-456b-bf83-862b22519818","Type":"ContainerStarted","Data":"ccbbed7951c78a8a0b50bf80a4194000776010b6d5c6ef21c5ab0ea7fc3577ef"} Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.073048 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" event={"ID":"ed6f7e21-c705-44d9-9092-0f2ed4a7cf60","Type":"ContainerDied","Data":"37c9de788fdfaad6282dcaf4eb96c7af870e2e91d8c677fbc6880f177d15e347"} Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.073087 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c9de788fdfaad6282dcaf4eb96c7af870e2e91d8c677fbc6880f177d15e347" Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.073067 4865 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g" Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.074626 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerID="1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b" exitCode=0 Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.074681 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zt55" event={"ID":"2d88ce56-9fd6-4b25-a5b5-8353d633ac48","Type":"ContainerDied","Data":"1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b"} Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.074705 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zt55" event={"ID":"2d88ce56-9fd6-4b25-a5b5-8353d633ac48","Type":"ContainerStarted","Data":"5cf8fcec4b334eb36828020f1d33f00e5166f8d8b025207cfb31d3920d10e449"} Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.101687 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.103692 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72","Type":"ContainerDied","Data":"110f4ea1f2d493af783433d97cb3eac7ad5274296628dda072282b873c541a28"} Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.103733 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="110f4ea1f2d493af783433d97cb3eac7ad5274296628dda072282b873c541a28" Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.824535 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:44 crc kubenswrapper[4865]: I0103 04:18:44.828698 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hjzq4" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.114214 4865 generic.go:334] "Generic (PLEG): container finished" podID="e0efa00a-22e7-456b-bf83-862b22519818" containerID="4ed7d238316124b1e75ebf5fa03f2c731783c0012b7b1887fd7ec4fed24cf546" exitCode=0 Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.114440 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmd79" event={"ID":"e0efa00a-22e7-456b-bf83-862b22519818","Type":"ContainerDied","Data":"4ed7d238316124b1e75ebf5fa03f2c731783c0012b7b1887fd7ec4fed24cf546"} Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.354473 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 03 04:18:45 crc kubenswrapper[4865]: E0103 04:18:45.355068 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72" containerName="pruner" Jan 03 04:18:45 crc kubenswrapper[4865]: 
I0103 04:18:45.355080 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72" containerName="pruner" Jan 03 04:18:45 crc kubenswrapper[4865]: E0103 04:18:45.355095 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" containerName="collect-profiles" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.355101 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" containerName="collect-profiles" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.355220 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" containerName="collect-profiles" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.355235 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b2f5b2-4136-4ed0-bcbe-5ad0ae771d72" containerName="pruner" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.355726 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.358635 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.358945 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.365845 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.421743 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.421814 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.525816 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.525957 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.526155 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.551190 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:45 crc kubenswrapper[4865]: I0103 04:18:45.687150 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:46 crc kubenswrapper[4865]: I0103 04:18:46.225870 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 03 04:18:46 crc kubenswrapper[4865]: W0103 04:18:46.256747 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb0f7aa49_fd74_4e08_bfac_4849ee163cca.slice/crio-e9bc6dc1f4602220ddbe1ad190332a068163bb5b365966aa888623a4967eb3d6 WatchSource:0}: Error finding container e9bc6dc1f4602220ddbe1ad190332a068163bb5b365966aa888623a4967eb3d6: Status 404 returned error can't find the container with id e9bc6dc1f4602220ddbe1ad190332a068163bb5b365966aa888623a4967eb3d6 Jan 03 04:18:47 crc kubenswrapper[4865]: I0103 04:18:47.177244 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0f7aa49-fd74-4e08-bfac-4849ee163cca","Type":"ContainerStarted","Data":"e9bc6dc1f4602220ddbe1ad190332a068163bb5b365966aa888623a4967eb3d6"} Jan 03 04:18:47 crc kubenswrapper[4865]: I0103 04:18:47.838564 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-w8m6t" Jan 03 04:18:48 crc kubenswrapper[4865]: I0103 04:18:48.180237 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0f7aa49-fd74-4e08-bfac-4849ee163cca","Type":"ContainerStarted","Data":"90f88667ad7427a6c9f8ced660d80cd79eb6f89b4d00874dd460f6c519d51eb6"} Jan 03 04:18:49 crc kubenswrapper[4865]: I0103 04:18:49.282474 4865 generic.go:334] "Generic (PLEG): container finished" podID="b0f7aa49-fd74-4e08-bfac-4849ee163cca" containerID="90f88667ad7427a6c9f8ced660d80cd79eb6f89b4d00874dd460f6c519d51eb6" exitCode=0 Jan 03 04:18:49 crc kubenswrapper[4865]: I0103 04:18:49.282572 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0f7aa49-fd74-4e08-bfac-4849ee163cca","Type":"ContainerDied","Data":"90f88667ad7427a6c9f8ced660d80cd79eb6f89b4d00874dd460f6c519d51eb6"} Jan 03 04:18:52 crc kubenswrapper[4865]: I0103 04:18:52.312008 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zjjj8" Jan 03 04:18:52 crc kubenswrapper[4865]: I0103 04:18:52.403472 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:52 crc kubenswrapper[4865]: I0103 04:18:52.409281 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:18:53 crc kubenswrapper[4865]: I0103 04:18:53.425196 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:53 crc kubenswrapper[4865]: I0103 04:18:53.538227 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kubelet-dir\") pod \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " Jan 03 04:18:53 crc kubenswrapper[4865]: I0103 04:18:53.538688 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kube-api-access\") pod \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\" (UID: \"b0f7aa49-fd74-4e08-bfac-4849ee163cca\") " Jan 03 04:18:53 crc kubenswrapper[4865]: I0103 04:18:53.538355 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0f7aa49-fd74-4e08-bfac-4849ee163cca" (UID: 
"b0f7aa49-fd74-4e08-bfac-4849ee163cca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:18:53 crc kubenswrapper[4865]: I0103 04:18:53.538990 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:18:53 crc kubenswrapper[4865]: I0103 04:18:53.545456 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0f7aa49-fd74-4e08-bfac-4849ee163cca" (UID: "b0f7aa49-fd74-4e08-bfac-4849ee163cca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:18:53 crc kubenswrapper[4865]: I0103 04:18:53.640664 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f7aa49-fd74-4e08-bfac-4849ee163cca-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:18:54 crc kubenswrapper[4865]: I0103 04:18:54.338005 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0f7aa49-fd74-4e08-bfac-4849ee163cca","Type":"ContainerDied","Data":"e9bc6dc1f4602220ddbe1ad190332a068163bb5b365966aa888623a4967eb3d6"} Jan 03 04:18:54 crc kubenswrapper[4865]: I0103 04:18:54.338051 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9bc6dc1f4602220ddbe1ad190332a068163bb5b365966aa888623a4967eb3d6" Jan 03 04:18:54 crc kubenswrapper[4865]: I0103 04:18:54.338128 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 04:18:55 crc kubenswrapper[4865]: I0103 04:18:55.871995 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:55 crc kubenswrapper[4865]: I0103 04:18:55.901484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/20f5ddd2-fabb-45db-83ad-9c45135ec710-metrics-certs\") pod \"network-metrics-daemon-wb9c7\" (UID: \"20f5ddd2-fabb-45db-83ad-9c45135ec710\") " pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:18:56 crc kubenswrapper[4865]: I0103 04:18:56.001681 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wb9c7" Jan 03 04:19:00 crc kubenswrapper[4865]: I0103 04:19:00.761899 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:19:10 crc kubenswrapper[4865]: I0103 04:19:10.740067 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:19:10 crc kubenswrapper[4865]: I0103 04:19:10.740466 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 
04:19:12 crc kubenswrapper[4865]: I0103 04:19:12.739368 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ffh4h" Jan 03 04:19:19 crc kubenswrapper[4865]: E0103 04:19:19.444634 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 03 04:19:19 crc kubenswrapper[4865]: E0103 04:19:19.445436 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9nrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,Env
From:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8vpwf_openshift-marketplace(a226c2c4-4ed7-4cfa-9fa2-b65151cef65b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 04:19:19 crc kubenswrapper[4865]: E0103 04:19:19.446672 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8vpwf" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" Jan 03 04:19:20 crc kubenswrapper[4865]: I0103 04:19:20.954097 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 04:19:21 crc kubenswrapper[4865]: E0103 04:19:21.195359 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8vpwf" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" Jan 03 04:19:21 crc kubenswrapper[4865]: E0103 04:19:21.328295 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3915742576/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 03 04:19:21 crc kubenswrapper[4865]: E0103 04:19:21.328476 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lsz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5zt55_openshift-marketplace(2d88ce56-9fd6-4b25-a5b5-8353d633ac48): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3915742576/2\": happened during read: context canceled" logger="UnhandledError" Jan 03 04:19:21 crc kubenswrapper[4865]: E0103 04:19:21.329773 4865 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage3915742576/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-5zt55" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" Jan 03 04:19:22 crc kubenswrapper[4865]: E0103 04:19:22.293018 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 03 04:19:22 crc kubenswrapper[4865]: E0103 04:19:22.293160 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwfw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-98z6f_openshift-marketplace(1b389eb7-6d2c-4e85-a3a1-88a517988670): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 04:19:22 crc kubenswrapper[4865]: E0103 04:19:22.294681 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-98z6f" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" Jan 03 04:19:23 crc 
kubenswrapper[4865]: I0103 04:19:23.137683 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 04:19:23 crc kubenswrapper[4865]: E0103 04:19:23.138191 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f7aa49-fd74-4e08-bfac-4849ee163cca" containerName="pruner" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.138207 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f7aa49-fd74-4e08-bfac-4849ee163cca" containerName="pruner" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.138333 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f7aa49-fd74-4e08-bfac-4849ee163cca" containerName="pruner" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.138759 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.144171 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.144443 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.204222 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.282338 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792b802b-75ff-4a5b-94ae-13e863b98d7b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.282651 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792b802b-75ff-4a5b-94ae-13e863b98d7b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.383958 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792b802b-75ff-4a5b-94ae-13e863b98d7b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.384136 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792b802b-75ff-4a5b-94ae-13e863b98d7b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.384279 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792b802b-75ff-4a5b-94ae-13e863b98d7b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.422857 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792b802b-75ff-4a5b-94ae-13e863b98d7b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:23 crc kubenswrapper[4865]: E0103 04:19:23.468223 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc 
= copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 03 04:19:23 crc kubenswrapper[4865]: E0103 04:19:23.468418 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55d2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fq86b_openshift-marketplace(d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" logger="UnhandledError" Jan 03 04:19:23 crc kubenswrapper[4865]: E0103 04:19:23.469666 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fq86b" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" Jan 03 04:19:23 crc kubenswrapper[4865]: I0103 04:19:23.497850 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.091767 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-98z6f" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.091882 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fq86b" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.107721 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5zt55" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.171786 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.172075 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77tzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ldq6h_openshift-marketplace(201f9ab0-bcbf-4497-9a16-c33765209c84): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" logger="UnhandledError" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.173307 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ldq6h" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.189718 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.189846 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ncdbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s4c5k_openshift-marketplace(62322f6e-727e-4261-bdaa-9b7e91b8c1f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.191044 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s4c5k" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" Jan 03 04:19:24 crc 
kubenswrapper[4865]: E0103 04:19:24.199414 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.199535 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d67qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-q6hjp_openshift-marketplace(6285a29b-28c2-4552-bb1b-3d31610858a2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 04:19:24 crc kubenswrapper[4865]: E0103 04:19:24.200812 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q6hjp" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.333311 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.334765 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.351665 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.435336 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.435411 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kube-api-access\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 
04:19:27.435519 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-var-lock\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.537151 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-var-lock\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.537264 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.537283 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-var-lock\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.537434 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kube-api-access\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.537497 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.560904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kube-api-access\") pod \"installer-9-crc\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: I0103 04:19:27.680443 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:19:27 crc kubenswrapper[4865]: E0103 04:19:27.791988 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s4c5k" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" Jan 03 04:19:27 crc kubenswrapper[4865]: E0103 04:19:27.792161 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q6hjp" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" Jan 03 04:19:27 crc kubenswrapper[4865]: E0103 04:19:27.792214 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ldq6h" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" Jan 03 04:19:27 crc kubenswrapper[4865]: E0103 04:19:27.826705 
4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 03 04:19:27 crc kubenswrapper[4865]: E0103 04:19:27.826936 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7kwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lmd79_openshift-marketplace(e0efa00a-22e7-456b-bf83-862b22519818): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 04:19:27 crc kubenswrapper[4865]: E0103 04:19:27.828723 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lmd79" podUID="e0efa00a-22e7-456b-bf83-862b22519818" Jan 03 04:19:28 crc kubenswrapper[4865]: E0103 04:19:28.069613 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lmd79" podUID="e0efa00a-22e7-456b-bf83-862b22519818" Jan 03 04:19:28 crc kubenswrapper[4865]: I0103 04:19:28.150974 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 04:19:28 crc kubenswrapper[4865]: I0103 04:19:28.321963 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wb9c7"] Jan 03 04:19:28 crc kubenswrapper[4865]: I0103 04:19:28.325012 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.075367 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"077ab4a6-779a-4573-bb2f-9ada89c60a3f","Type":"ContainerStarted","Data":"17eaa539f0021eaed9d1e2b84c57c424a156ae5cbf0b87ec51c2f2d344d12de0"} Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.076166 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"077ab4a6-779a-4573-bb2f-9ada89c60a3f","Type":"ContainerStarted","Data":"5bd32f1014fd6cbdcb2ae52556ca4745441118ffc92703f590156fce1701dbd1"} Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.077158 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" event={"ID":"20f5ddd2-fabb-45db-83ad-9c45135ec710","Type":"ContainerStarted","Data":"03931797bba595cb0c970b3bb3e85274b696f493c5970dd9bd41c8c6140945b3"} Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.077201 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" event={"ID":"20f5ddd2-fabb-45db-83ad-9c45135ec710","Type":"ContainerStarted","Data":"7e2e46469cda17764fd808b82f5b588dec492a13de9a125f290d3b7858fdb97a"} Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.077219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wb9c7" event={"ID":"20f5ddd2-fabb-45db-83ad-9c45135ec710","Type":"ContainerStarted","Data":"3f310c0f6cdd80400bda2e95bf7aa510c5138fa885ceed428b807aa390300e07"} Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.078853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"792b802b-75ff-4a5b-94ae-13e863b98d7b","Type":"ContainerStarted","Data":"6d986c5b7c947113ece53380a409202c54344f51da6ba2417dff695edd20d7f4"} Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.078892 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"792b802b-75ff-4a5b-94ae-13e863b98d7b","Type":"ContainerStarted","Data":"810a96d6e2dfdde2c053de3a242b4f0fdb2ffbbce4e2a2ff656290fa08a52481"} Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.111640 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.111623469 
podStartE2EDuration="2.111623469s" podCreationTimestamp="2026-01-03 04:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:19:29.095342287 +0000 UTC m=+196.212395472" watchObservedRunningTime="2026-01-03 04:19:29.111623469 +0000 UTC m=+196.228676654" Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.111915 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=6.111910637 podStartE2EDuration="6.111910637s" podCreationTimestamp="2026-01-03 04:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:19:29.10988685 +0000 UTC m=+196.226940035" watchObservedRunningTime="2026-01-03 04:19:29.111910637 +0000 UTC m=+196.228963822" Jan 03 04:19:29 crc kubenswrapper[4865]: I0103 04:19:29.139055 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wb9c7" podStartSLOduration=176.139027765 podStartE2EDuration="2m56.139027765s" podCreationTimestamp="2026-01-03 04:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:19:29.135378532 +0000 UTC m=+196.252431757" watchObservedRunningTime="2026-01-03 04:19:29.139027765 +0000 UTC m=+196.256080980" Jan 03 04:19:30 crc kubenswrapper[4865]: I0103 04:19:30.089542 4865 generic.go:334] "Generic (PLEG): container finished" podID="792b802b-75ff-4a5b-94ae-13e863b98d7b" containerID="6d986c5b7c947113ece53380a409202c54344f51da6ba2417dff695edd20d7f4" exitCode=0 Jan 03 04:19:30 crc kubenswrapper[4865]: I0103 04:19:30.089646 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"792b802b-75ff-4a5b-94ae-13e863b98d7b","Type":"ContainerDied","Data":"6d986c5b7c947113ece53380a409202c54344f51da6ba2417dff695edd20d7f4"} Jan 03 04:19:31 crc kubenswrapper[4865]: I0103 04:19:31.354978 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:31 crc kubenswrapper[4865]: I0103 04:19:31.505530 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792b802b-75ff-4a5b-94ae-13e863b98d7b-kube-api-access\") pod \"792b802b-75ff-4a5b-94ae-13e863b98d7b\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " Jan 03 04:19:31 crc kubenswrapper[4865]: I0103 04:19:31.505601 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792b802b-75ff-4a5b-94ae-13e863b98d7b-kubelet-dir\") pod \"792b802b-75ff-4a5b-94ae-13e863b98d7b\" (UID: \"792b802b-75ff-4a5b-94ae-13e863b98d7b\") " Jan 03 04:19:31 crc kubenswrapper[4865]: I0103 04:19:31.505796 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/792b802b-75ff-4a5b-94ae-13e863b98d7b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "792b802b-75ff-4a5b-94ae-13e863b98d7b" (UID: "792b802b-75ff-4a5b-94ae-13e863b98d7b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:19:31 crc kubenswrapper[4865]: I0103 04:19:31.506190 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792b802b-75ff-4a5b-94ae-13e863b98d7b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:31 crc kubenswrapper[4865]: I0103 04:19:31.510565 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792b802b-75ff-4a5b-94ae-13e863b98d7b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "792b802b-75ff-4a5b-94ae-13e863b98d7b" (UID: "792b802b-75ff-4a5b-94ae-13e863b98d7b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:19:31 crc kubenswrapper[4865]: I0103 04:19:31.607728 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792b802b-75ff-4a5b-94ae-13e863b98d7b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:32 crc kubenswrapper[4865]: I0103 04:19:32.102533 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"792b802b-75ff-4a5b-94ae-13e863b98d7b","Type":"ContainerDied","Data":"810a96d6e2dfdde2c053de3a242b4f0fdb2ffbbce4e2a2ff656290fa08a52481"} Jan 03 04:19:32 crc kubenswrapper[4865]: I0103 04:19:32.102896 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="810a96d6e2dfdde2c053de3a242b4f0fdb2ffbbce4e2a2ff656290fa08a52481" Jan 03 04:19:32 crc kubenswrapper[4865]: I0103 04:19:32.102621 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 04:19:36 crc kubenswrapper[4865]: I0103 04:19:36.126826 4865 generic.go:334] "Generic (PLEG): container finished" podID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerID="b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded" exitCode=0 Jan 03 04:19:36 crc kubenswrapper[4865]: I0103 04:19:36.126963 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vpwf" event={"ID":"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b","Type":"ContainerDied","Data":"b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded"} Jan 03 04:19:37 crc kubenswrapper[4865]: I0103 04:19:37.140811 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq86b" event={"ID":"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8","Type":"ContainerStarted","Data":"755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238"} Jan 03 04:19:37 crc kubenswrapper[4865]: I0103 04:19:37.146970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vpwf" event={"ID":"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b","Type":"ContainerStarted","Data":"2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc"} Jan 03 04:19:37 crc kubenswrapper[4865]: I0103 04:19:37.178914 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8vpwf" podStartSLOduration=2.24127304 podStartE2EDuration="56.178897047s" podCreationTimestamp="2026-01-03 04:18:41 +0000 UTC" firstStartedPulling="2026-01-03 04:18:43.048052228 +0000 UTC m=+150.165105413" lastFinishedPulling="2026-01-03 04:19:36.985676235 +0000 UTC m=+204.102729420" observedRunningTime="2026-01-03 04:19:37.178536336 +0000 UTC m=+204.295589551" watchObservedRunningTime="2026-01-03 04:19:37.178897047 +0000 UTC m=+204.295950232" Jan 03 04:19:38 crc kubenswrapper[4865]: I0103 04:19:38.154456 4865 
generic.go:334] "Generic (PLEG): container finished" podID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerID="755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238" exitCode=0 Jan 03 04:19:38 crc kubenswrapper[4865]: I0103 04:19:38.154511 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq86b" event={"ID":"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8","Type":"ContainerDied","Data":"755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238"} Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.182841 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq86b" event={"ID":"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8","Type":"ContainerStarted","Data":"47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a"} Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.193201 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4c5k" event={"ID":"62322f6e-727e-4261-bdaa-9b7e91b8c1f7","Type":"ContainerStarted","Data":"8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef"} Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.195007 4865 generic.go:334] "Generic (PLEG): container finished" podID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerID="75be850328abf3ac7597845eadfc2c13e6e31c7ffab2954f4026b3513df01c24" exitCode=0 Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.195050 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98z6f" event={"ID":"1b389eb7-6d2c-4e85-a3a1-88a517988670","Type":"ContainerDied","Data":"75be850328abf3ac7597845eadfc2c13e6e31c7ffab2954f4026b3513df01c24"} Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.205115 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fq86b" podStartSLOduration=3.357228205 podStartE2EDuration="1m1.205098478s" 
podCreationTimestamp="2026-01-03 04:18:39 +0000 UTC" firstStartedPulling="2026-01-03 04:18:40.972581597 +0000 UTC m=+148.089634782" lastFinishedPulling="2026-01-03 04:19:38.82045187 +0000 UTC m=+205.937505055" observedRunningTime="2026-01-03 04:19:40.204679326 +0000 UTC m=+207.321732511" watchObservedRunningTime="2026-01-03 04:19:40.205098478 +0000 UTC m=+207.322151663" Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.213304 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq6h" event={"ID":"201f9ab0-bcbf-4497-9a16-c33765209c84","Type":"ContainerStarted","Data":"41cb5505624f5ee3ab4ddceca5e908e2e8d12aaf7145852eec077ff9c0fd2fad"} Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.216008 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zt55" event={"ID":"2d88ce56-9fd6-4b25-a5b5-8353d633ac48","Type":"ContainerStarted","Data":"789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39"} Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.739467 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.739528 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.739579 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:19:40 crc 
kubenswrapper[4865]: I0103 04:19:40.740150 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:19:40 crc kubenswrapper[4865]: I0103 04:19:40.740276 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab" gracePeriod=600 Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.149113 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sqb86"] Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.221705 4865 generic.go:334] "Generic (PLEG): container finished" podID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerID="41cb5505624f5ee3ab4ddceca5e908e2e8d12aaf7145852eec077ff9c0fd2fad" exitCode=0 Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.221778 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq6h" event={"ID":"201f9ab0-bcbf-4497-9a16-c33765209c84","Type":"ContainerDied","Data":"41cb5505624f5ee3ab4ddceca5e908e2e8d12aaf7145852eec077ff9c0fd2fad"} Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.223298 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerID="789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39" exitCode=0 Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.223355 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zt55" 
event={"ID":"2d88ce56-9fd6-4b25-a5b5-8353d633ac48","Type":"ContainerDied","Data":"789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39"} Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.239834 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab" exitCode=0 Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.239919 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab"} Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.252716 4865 generic.go:334] "Generic (PLEG): container finished" podID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerID="8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef" exitCode=0 Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.252764 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4c5k" event={"ID":"62322f6e-727e-4261-bdaa-9b7e91b8c1f7","Type":"ContainerDied","Data":"8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef"} Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.802687 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8vpwf" Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.802752 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8vpwf" Jan 03 04:19:41 crc kubenswrapper[4865]: I0103 04:19:41.866133 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8vpwf" Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.266995 4865 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-lmd79" event={"ID":"e0efa00a-22e7-456b-bf83-862b22519818","Type":"ContainerStarted","Data":"0d4e0723692f44faa417456ab0b48be86f5c3300761468957d57fe88a8e31223"} Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.269774 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98z6f" event={"ID":"1b389eb7-6d2c-4e85-a3a1-88a517988670","Type":"ContainerStarted","Data":"0df91617e4976f335f1f576b44d3adbc5f6d203c4ee55406bf72a000286af7d3"} Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.272105 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq6h" event={"ID":"201f9ab0-bcbf-4497-9a16-c33765209c84","Type":"ContainerStarted","Data":"db2295293ccb3fd2740beeb26361ecf22580db1d9ce041fe433c3ae0cc1c3be2"} Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.274640 4865 generic.go:334] "Generic (PLEG): container finished" podID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerID="06e6306f53bfcbc8b94b04a0e852b6ba2629d04dcc743da09b0a0319ead2b9c0" exitCode=0 Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.274705 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6hjp" event={"ID":"6285a29b-28c2-4552-bb1b-3d31610858a2","Type":"ContainerDied","Data":"06e6306f53bfcbc8b94b04a0e852b6ba2629d04dcc743da09b0a0319ead2b9c0"} Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.276653 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zt55" event={"ID":"2d88ce56-9fd6-4b25-a5b5-8353d633ac48","Type":"ContainerStarted","Data":"8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f"} Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.290657 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"4e717868cbcec21bcd2ab2f7dcda005af8e3ebb229bcbf85e0a159137ebd2f9e"} Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.300366 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4c5k" event={"ID":"62322f6e-727e-4261-bdaa-9b7e91b8c1f7","Type":"ContainerStarted","Data":"1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5"} Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.314191 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ldq6h" podStartSLOduration=4.079763613 podStartE2EDuration="1m3.314175012s" podCreationTimestamp="2026-01-03 04:18:39 +0000 UTC" firstStartedPulling="2026-01-03 04:18:42.0154907 +0000 UTC m=+149.132543885" lastFinishedPulling="2026-01-03 04:19:41.249902099 +0000 UTC m=+208.366955284" observedRunningTime="2026-01-03 04:19:42.311573958 +0000 UTC m=+209.428627143" watchObservedRunningTime="2026-01-03 04:19:42.314175012 +0000 UTC m=+209.431228197" Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.333564 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98z6f" podStartSLOduration=3.221791316 podStartE2EDuration="1m1.333549091s" podCreationTimestamp="2026-01-03 04:18:41 +0000 UTC" firstStartedPulling="2026-01-03 04:18:43.051650674 +0000 UTC m=+150.168703859" lastFinishedPulling="2026-01-03 04:19:41.163408439 +0000 UTC m=+208.280461634" observedRunningTime="2026-01-03 04:19:42.332694027 +0000 UTC m=+209.449747222" watchObservedRunningTime="2026-01-03 04:19:42.333549091 +0000 UTC m=+209.450602276" Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.346769 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8vpwf" Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.384407 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5zt55" podStartSLOduration=2.508026995 podStartE2EDuration="1m0.384379651s" podCreationTimestamp="2026-01-03 04:18:42 +0000 UTC" firstStartedPulling="2026-01-03 04:18:44.096586512 +0000 UTC m=+151.213639697" lastFinishedPulling="2026-01-03 04:19:41.972939168 +0000 UTC m=+209.089992353" observedRunningTime="2026-01-03 04:19:42.382741214 +0000 UTC m=+209.499794399" watchObservedRunningTime="2026-01-03 04:19:42.384379651 +0000 UTC m=+209.501432836" Jan 03 04:19:42 crc kubenswrapper[4865]: I0103 04:19:42.411504 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s4c5k" podStartSLOduration=2.687803754 podStartE2EDuration="1m3.411488009s" podCreationTimestamp="2026-01-03 04:18:39 +0000 UTC" firstStartedPulling="2026-01-03 04:18:40.921979314 +0000 UTC m=+148.039032499" lastFinishedPulling="2026-01-03 04:19:41.645663559 +0000 UTC m=+208.762716754" observedRunningTime="2026-01-03 04:19:42.410439209 +0000 UTC m=+209.527492394" watchObservedRunningTime="2026-01-03 04:19:42.411488009 +0000 UTC m=+209.528541184" Jan 03 04:19:43 crc kubenswrapper[4865]: I0103 04:19:43.061067 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5zt55" Jan 03 04:19:43 crc kubenswrapper[4865]: I0103 04:19:43.061339 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5zt55" Jan 03 04:19:43 crc kubenswrapper[4865]: I0103 04:19:43.305073 4865 generic.go:334] "Generic (PLEG): container finished" podID="e0efa00a-22e7-456b-bf83-862b22519818" containerID="0d4e0723692f44faa417456ab0b48be86f5c3300761468957d57fe88a8e31223" exitCode=0 Jan 03 04:19:43 crc kubenswrapper[4865]: I0103 04:19:43.305140 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmd79" 
event={"ID":"e0efa00a-22e7-456b-bf83-862b22519818","Type":"ContainerDied","Data":"0d4e0723692f44faa417456ab0b48be86f5c3300761468957d57fe88a8e31223"} Jan 03 04:19:43 crc kubenswrapper[4865]: I0103 04:19:43.307154 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6hjp" event={"ID":"6285a29b-28c2-4552-bb1b-3d31610858a2","Type":"ContainerStarted","Data":"d859ef04373041b7de3034c3e3d5fded9c54c9c4cea31e5c0c2f850910b7f59e"} Jan 03 04:19:43 crc kubenswrapper[4865]: I0103 04:19:43.350824 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6hjp" podStartSLOduration=2.642446652 podStartE2EDuration="1m3.350808263s" podCreationTimestamp="2026-01-03 04:18:40 +0000 UTC" firstStartedPulling="2026-01-03 04:18:42.020076664 +0000 UTC m=+149.137129849" lastFinishedPulling="2026-01-03 04:19:42.728438285 +0000 UTC m=+209.845491460" observedRunningTime="2026-01-03 04:19:43.346968454 +0000 UTC m=+210.464021639" watchObservedRunningTime="2026-01-03 04:19:43.350808263 +0000 UTC m=+210.467861448" Jan 03 04:19:44 crc kubenswrapper[4865]: I0103 04:19:44.099015 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zt55" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="registry-server" probeResult="failure" output=< Jan 03 04:19:44 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 03 04:19:44 crc kubenswrapper[4865]: > Jan 03 04:19:44 crc kubenswrapper[4865]: I0103 04:19:44.315248 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmd79" event={"ID":"e0efa00a-22e7-456b-bf83-862b22519818","Type":"ContainerStarted","Data":"3482ff57dfcbab9476590c18a386642eb764a989f14c46960337108849028e4c"} Jan 03 04:19:44 crc kubenswrapper[4865]: I0103 04:19:44.340902 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-lmd79" podStartSLOduration=2.692678178 podStartE2EDuration="1m1.340887465s" podCreationTimestamp="2026-01-03 04:18:43 +0000 UTC" firstStartedPulling="2026-01-03 04:18:45.11797703 +0000 UTC m=+152.235030215" lastFinishedPulling="2026-01-03 04:19:43.766186317 +0000 UTC m=+210.883239502" observedRunningTime="2026-01-03 04:19:44.339219248 +0000 UTC m=+211.456272443" watchObservedRunningTime="2026-01-03 04:19:44.340887465 +0000 UTC m=+211.457940650" Jan 03 04:19:49 crc kubenswrapper[4865]: I0103 04:19:49.846841 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:19:49 crc kubenswrapper[4865]: I0103 04:19:49.847509 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:19:49 crc kubenswrapper[4865]: I0103 04:19:49.891542 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.040555 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.040667 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.099338 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.223934 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.223978 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.268374 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.389003 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.410236 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.425553 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.446960 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.447022 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:19:50 crc kubenswrapper[4865]: I0103 04:19:50.519299 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:19:51 crc kubenswrapper[4865]: I0103 04:19:51.426871 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:19:51 crc kubenswrapper[4865]: I0103 04:19:51.533676 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-p8hk7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 04:19:51 crc 
kubenswrapper[4865]: I0103 04:19:51.533764 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8hk7" podUID="3fbd343a-070a-4b55-b3f9-37114883bbbb" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 04:19:52 crc kubenswrapper[4865]: I0103 04:19:52.169235 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldq6h"] Jan 03 04:19:52 crc kubenswrapper[4865]: I0103 04:19:52.266670 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98z6f" Jan 03 04:19:52 crc kubenswrapper[4865]: I0103 04:19:52.267209 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98z6f" Jan 03 04:19:52 crc kubenswrapper[4865]: I0103 04:19:52.334503 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98z6f" Jan 03 04:19:52 crc kubenswrapper[4865]: I0103 04:19:52.365890 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ldq6h" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerName="registry-server" containerID="cri-o://db2295293ccb3fd2740beeb26361ecf22580db1d9ce041fe433c3ae0cc1c3be2" gracePeriod=2 Jan 03 04:19:52 crc kubenswrapper[4865]: I0103 04:19:52.763131 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6hjp"] Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.151111 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5zt55" Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.217842 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5zt55" Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.371256 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6hjp" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerName="registry-server" containerID="cri-o://d859ef04373041b7de3034c3e3d5fded9c54c9c4cea31e5c0c2f850910b7f59e" gracePeriod=2 Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.428533 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-98z6f" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="registry-server" probeResult="failure" output=< Jan 03 04:19:53 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 03 04:19:53 crc kubenswrapper[4865]: > Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.469716 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.469758 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.474323 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98z6f" Jan 03 04:19:53 crc kubenswrapper[4865]: I0103 04:19:53.517225 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:19:54 crc kubenswrapper[4865]: I0103 04:19:54.381620 4865 generic.go:334] "Generic (PLEG): container finished" podID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerID="db2295293ccb3fd2740beeb26361ecf22580db1d9ce041fe433c3ae0cc1c3be2" exitCode=0 Jan 03 04:19:54 crc kubenswrapper[4865]: I0103 04:19:54.381910 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ldq6h" event={"ID":"201f9ab0-bcbf-4497-9a16-c33765209c84","Type":"ContainerDied","Data":"db2295293ccb3fd2740beeb26361ecf22580db1d9ce041fe433c3ae0cc1c3be2"} Jan 03 04:19:54 crc kubenswrapper[4865]: I0103 04:19:54.392903 4865 generic.go:334] "Generic (PLEG): container finished" podID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerID="d859ef04373041b7de3034c3e3d5fded9c54c9c4cea31e5c0c2f850910b7f59e" exitCode=0 Jan 03 04:19:54 crc kubenswrapper[4865]: I0103 04:19:54.392964 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6hjp" event={"ID":"6285a29b-28c2-4552-bb1b-3d31610858a2","Type":"ContainerDied","Data":"d859ef04373041b7de3034c3e3d5fded9c54c9c4cea31e5c0c2f850910b7f59e"} Jan 03 04:19:54 crc kubenswrapper[4865]: I0103 04:19:54.463346 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.170529 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98z6f"] Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.397509 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98z6f" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="registry-server" containerID="cri-o://0df91617e4976f335f1f576b44d3adbc5f6d203c4ee55406bf72a000286af7d3" gracePeriod=2 Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.665348 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.671004 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.853058 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-utilities\") pod \"201f9ab0-bcbf-4497-9a16-c33765209c84\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.853206 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d67qz\" (UniqueName: \"kubernetes.io/projected/6285a29b-28c2-4552-bb1b-3d31610858a2-kube-api-access-d67qz\") pod \"6285a29b-28c2-4552-bb1b-3d31610858a2\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.853271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-catalog-content\") pod \"201f9ab0-bcbf-4497-9a16-c33765209c84\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.853321 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77tzj\" (UniqueName: \"kubernetes.io/projected/201f9ab0-bcbf-4497-9a16-c33765209c84-kube-api-access-77tzj\") pod \"201f9ab0-bcbf-4497-9a16-c33765209c84\" (UID: \"201f9ab0-bcbf-4497-9a16-c33765209c84\") " Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.853448 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-utilities\") pod \"6285a29b-28c2-4552-bb1b-3d31610858a2\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.853530 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-catalog-content\") pod \"6285a29b-28c2-4552-bb1b-3d31610858a2\" (UID: \"6285a29b-28c2-4552-bb1b-3d31610858a2\") " Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.854544 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-utilities" (OuterVolumeSpecName: "utilities") pod "201f9ab0-bcbf-4497-9a16-c33765209c84" (UID: "201f9ab0-bcbf-4497-9a16-c33765209c84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.856550 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-utilities" (OuterVolumeSpecName: "utilities") pod "6285a29b-28c2-4552-bb1b-3d31610858a2" (UID: "6285a29b-28c2-4552-bb1b-3d31610858a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.861737 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6285a29b-28c2-4552-bb1b-3d31610858a2-kube-api-access-d67qz" (OuterVolumeSpecName: "kube-api-access-d67qz") pod "6285a29b-28c2-4552-bb1b-3d31610858a2" (UID: "6285a29b-28c2-4552-bb1b-3d31610858a2"). InnerVolumeSpecName "kube-api-access-d67qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.872588 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201f9ab0-bcbf-4497-9a16-c33765209c84-kube-api-access-77tzj" (OuterVolumeSpecName: "kube-api-access-77tzj") pod "201f9ab0-bcbf-4497-9a16-c33765209c84" (UID: "201f9ab0-bcbf-4497-9a16-c33765209c84"). InnerVolumeSpecName "kube-api-access-77tzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.956020 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77tzj\" (UniqueName: \"kubernetes.io/projected/201f9ab0-bcbf-4497-9a16-c33765209c84-kube-api-access-77tzj\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.956434 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.957325 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:55 crc kubenswrapper[4865]: I0103 04:19:55.957548 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d67qz\" (UniqueName: \"kubernetes.io/projected/6285a29b-28c2-4552-bb1b-3d31610858a2-kube-api-access-d67qz\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.016291 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "201f9ab0-bcbf-4497-9a16-c33765209c84" (UID: "201f9ab0-bcbf-4497-9a16-c33765209c84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.058334 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201f9ab0-bcbf-4497-9a16-c33765209c84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.409049 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq6h" event={"ID":"201f9ab0-bcbf-4497-9a16-c33765209c84","Type":"ContainerDied","Data":"32d7bced6f6c3265960e36c514c115d5c9df989ecf4c7799dbdd6716409c4701"} Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.409217 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldq6h" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.410310 4865 scope.go:117] "RemoveContainer" containerID="db2295293ccb3fd2740beeb26361ecf22580db1d9ce041fe433c3ae0cc1c3be2" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.413495 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6hjp" event={"ID":"6285a29b-28c2-4552-bb1b-3d31610858a2","Type":"ContainerDied","Data":"272038690a3017bfdf40a20f4975a0fb3b3c8d711b49ffd38460f4a3fe503401"} Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.413686 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6hjp" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.417770 4865 generic.go:334] "Generic (PLEG): container finished" podID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerID="0df91617e4976f335f1f576b44d3adbc5f6d203c4ee55406bf72a000286af7d3" exitCode=0 Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.417834 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98z6f" event={"ID":"1b389eb7-6d2c-4e85-a3a1-88a517988670","Type":"ContainerDied","Data":"0df91617e4976f335f1f576b44d3adbc5f6d203c4ee55406bf72a000286af7d3"} Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.434462 4865 scope.go:117] "RemoveContainer" containerID="41cb5505624f5ee3ab4ddceca5e908e2e8d12aaf7145852eec077ff9c0fd2fad" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.464311 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldq6h"] Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.467757 4865 scope.go:117] "RemoveContainer" containerID="cd1ceeaa11466171797f6b6b5558e49f492b7239d05443be191443de3bc5cb34" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.471785 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ldq6h"] Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.493073 4865 scope.go:117] "RemoveContainer" containerID="d859ef04373041b7de3034c3e3d5fded9c54c9c4cea31e5c0c2f850910b7f59e" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.528478 4865 scope.go:117] "RemoveContainer" containerID="06e6306f53bfcbc8b94b04a0e852b6ba2629d04dcc743da09b0a0319ead2b9c0" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.563873 4865 scope.go:117] "RemoveContainer" containerID="eb6aa70d8b58bce085c00b241c22276bf71dad6fbcce40adc842f7ed0ed31424" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.657478 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6285a29b-28c2-4552-bb1b-3d31610858a2" (UID: "6285a29b-28c2-4552-bb1b-3d31610858a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.667141 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6285a29b-28c2-4552-bb1b-3d31610858a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.761330 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6hjp"] Jan 03 04:19:56 crc kubenswrapper[4865]: I0103 04:19:56.767649 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6hjp"] Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.167643 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" path="/var/lib/kubelet/pods/201f9ab0-bcbf-4497-9a16-c33765209c84/volumes" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.169362 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" path="/var/lib/kubelet/pods/6285a29b-28c2-4552-bb1b-3d31610858a2/volumes" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.378954 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98z6f" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.426748 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98z6f" event={"ID":"1b389eb7-6d2c-4e85-a3a1-88a517988670","Type":"ContainerDied","Data":"df4fa9795240ca28b89b5d8d70b63de4e77f488afdb432cba592b761ce12845e"} Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.426796 4865 scope.go:117] "RemoveContainer" containerID="0df91617e4976f335f1f576b44d3adbc5f6d203c4ee55406bf72a000286af7d3" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.426869 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98z6f" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.451431 4865 scope.go:117] "RemoveContainer" containerID="75be850328abf3ac7597845eadfc2c13e6e31c7ffab2954f4026b3513df01c24" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.475981 4865 scope.go:117] "RemoveContainer" containerID="45cad4386fd829a2ceccbe5f01ce615b754a58fc5b356f0e9ffc6f570c1f09ac" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.568736 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmd79"] Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.572746 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lmd79" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="registry-server" containerID="cri-o://3482ff57dfcbab9476590c18a386642eb764a989f14c46960337108849028e4c" gracePeriod=2 Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.580775 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-utilities\") pod \"1b389eb7-6d2c-4e85-a3a1-88a517988670\" (UID: 
\"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.580848 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfw6\" (UniqueName: \"kubernetes.io/projected/1b389eb7-6d2c-4e85-a3a1-88a517988670-kube-api-access-lwfw6\") pod \"1b389eb7-6d2c-4e85-a3a1-88a517988670\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.580941 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-catalog-content\") pod \"1b389eb7-6d2c-4e85-a3a1-88a517988670\" (UID: \"1b389eb7-6d2c-4e85-a3a1-88a517988670\") " Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.583175 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-utilities" (OuterVolumeSpecName: "utilities") pod "1b389eb7-6d2c-4e85-a3a1-88a517988670" (UID: "1b389eb7-6d2c-4e85-a3a1-88a517988670"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.585795 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b389eb7-6d2c-4e85-a3a1-88a517988670-kube-api-access-lwfw6" (OuterVolumeSpecName: "kube-api-access-lwfw6") pod "1b389eb7-6d2c-4e85-a3a1-88a517988670" (UID: "1b389eb7-6d2c-4e85-a3a1-88a517988670"). InnerVolumeSpecName "kube-api-access-lwfw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.622179 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b389eb7-6d2c-4e85-a3a1-88a517988670" (UID: "1b389eb7-6d2c-4e85-a3a1-88a517988670"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.685273 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.685319 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfw6\" (UniqueName: \"kubernetes.io/projected/1b389eb7-6d2c-4e85-a3a1-88a517988670-kube-api-access-lwfw6\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.685338 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b389eb7-6d2c-4e85-a3a1-88a517988670-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.773717 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98z6f"] Jan 03 04:19:57 crc kubenswrapper[4865]: I0103 04:19:57.781054 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98z6f"] Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.442234 4865 generic.go:334] "Generic (PLEG): container finished" podID="e0efa00a-22e7-456b-bf83-862b22519818" containerID="3482ff57dfcbab9476590c18a386642eb764a989f14c46960337108849028e4c" exitCode=0 Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.442277 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmd79" event={"ID":"e0efa00a-22e7-456b-bf83-862b22519818","Type":"ContainerDied","Data":"3482ff57dfcbab9476590c18a386642eb764a989f14c46960337108849028e4c"} Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.516009 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.697816 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7kwz\" (UniqueName: \"kubernetes.io/projected/e0efa00a-22e7-456b-bf83-862b22519818-kube-api-access-r7kwz\") pod \"e0efa00a-22e7-456b-bf83-862b22519818\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.699307 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-utilities\") pod \"e0efa00a-22e7-456b-bf83-862b22519818\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.699369 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-catalog-content\") pod \"e0efa00a-22e7-456b-bf83-862b22519818\" (UID: \"e0efa00a-22e7-456b-bf83-862b22519818\") " Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.700795 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-utilities" (OuterVolumeSpecName: "utilities") pod "e0efa00a-22e7-456b-bf83-862b22519818" (UID: "e0efa00a-22e7-456b-bf83-862b22519818"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.705796 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0efa00a-22e7-456b-bf83-862b22519818-kube-api-access-r7kwz" (OuterVolumeSpecName: "kube-api-access-r7kwz") pod "e0efa00a-22e7-456b-bf83-862b22519818" (UID: "e0efa00a-22e7-456b-bf83-862b22519818"). InnerVolumeSpecName "kube-api-access-r7kwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.800915 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.801344 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7kwz\" (UniqueName: \"kubernetes.io/projected/e0efa00a-22e7-456b-bf83-862b22519818-kube-api-access-r7kwz\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.836785 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0efa00a-22e7-456b-bf83-862b22519818" (UID: "e0efa00a-22e7-456b-bf83-862b22519818"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:19:58 crc kubenswrapper[4865]: I0103 04:19:58.901930 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0efa00a-22e7-456b-bf83-862b22519818-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.165837 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" path="/var/lib/kubelet/pods/1b389eb7-6d2c-4e85-a3a1-88a517988670/volumes" Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.453239 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmd79" event={"ID":"e0efa00a-22e7-456b-bf83-862b22519818","Type":"ContainerDied","Data":"ccbbed7951c78a8a0b50bf80a4194000776010b6d5c6ef21c5ab0ea7fc3577ef"} Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.453318 4865 scope.go:117] "RemoveContainer" containerID="3482ff57dfcbab9476590c18a386642eb764a989f14c46960337108849028e4c" Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.453493 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmd79" Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.484282 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmd79"] Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.486336 4865 scope.go:117] "RemoveContainer" containerID="0d4e0723692f44faa417456ab0b48be86f5c3300761468957d57fe88a8e31223" Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.491103 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lmd79"] Jan 03 04:19:59 crc kubenswrapper[4865]: I0103 04:19:59.512919 4865 scope.go:117] "RemoveContainer" containerID="4ed7d238316124b1e75ebf5fa03f2c731783c0012b7b1887fd7ec4fed24cf546" Jan 03 04:20:01 crc kubenswrapper[4865]: I0103 04:20:01.167726 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0efa00a-22e7-456b-bf83-862b22519818" path="/var/lib/kubelet/pods/e0efa00a-22e7-456b-bf83-862b22519818/volumes" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.080670 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081523 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081550 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081567 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081580 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" 
containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081601 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081614 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081631 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081643 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081666 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081680 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081699 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081711 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081729 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081741 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" 
containerName="extract-utilities" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081763 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792b802b-75ff-4a5b-94ae-13e863b98d7b" containerName="pruner" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081778 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="792b802b-75ff-4a5b-94ae-13e863b98d7b" containerName="pruner" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081792 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081804 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081823 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081834 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081852 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081865 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081879 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081891 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" 
containerName="extract-content" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.081910 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.081922 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.082096 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0efa00a-22e7-456b-bf83-862b22519818" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.082120 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b389eb7-6d2c-4e85-a3a1-88a517988670" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.082136 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="792b802b-75ff-4a5b-94ae-13e863b98d7b" containerName="pruner" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.082160 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6285a29b-28c2-4552-bb1b-3d31610858a2" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.082177 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="201f9ab0-bcbf-4497-9a16-c33765209c84" containerName="registry-server" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.082703 4865 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.082951 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.083223 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd" gracePeriod=15 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.083403 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a" gracePeriod=15 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.083426 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288" gracePeriod=15 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.083489 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637" gracePeriod=15 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.083576 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18" gracePeriod=15 Jan 03 04:20:06 crc 
kubenswrapper[4865]: I0103 04:20:06.083705 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.083979 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.083999 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.084009 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084014 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.084032 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084038 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.084046 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084051 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.084061 4865 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084068 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.084113 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084122 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084238 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084251 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084258 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084267 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084274 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084283 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 04:20:06 crc kubenswrapper[4865]: E0103 04:20:06.084364 4865 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.084371 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.102988 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.103112 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.103264 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.103341 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.103429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.103518 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.103591 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.103659 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.204342 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" containerName="oauth-openshift" 
containerID="cri-o://5d22cb4d6a03a5e2226c7c0f790b1667a67ceace4b6b1661b51c62f4907ee641" gracePeriod=15 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205372 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205625 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205729 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205823 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205836 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205863 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205885 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205916 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205942 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.205967 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.206016 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.206040 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.206045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.206198 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 
04:20:06.206290 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.504971 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.506097 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.506937 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18" exitCode=0 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.506987 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288" exitCode=0 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.507008 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a" exitCode=0 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.507026 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637" exitCode=2 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.507092 4865 scope.go:117] "RemoveContainer" 
containerID="6b076ab97baea61eaeec8a06a06c61699cde99480b8ae9d7f9e469c261abe19e" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.509522 4865 generic.go:334] "Generic (PLEG): container finished" podID="ab146386-688e-4e0b-acf7-ee0d9c087d25" containerID="5d22cb4d6a03a5e2226c7c0f790b1667a67ceace4b6b1661b51c62f4907ee641" exitCode=0 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.509636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" event={"ID":"ab146386-688e-4e0b-acf7-ee0d9c087d25","Type":"ContainerDied","Data":"5d22cb4d6a03a5e2226c7c0f790b1667a67ceace4b6b1661b51c62f4907ee641"} Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.512441 4865 generic.go:334] "Generic (PLEG): container finished" podID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" containerID="17eaa539f0021eaed9d1e2b84c57c424a156ae5cbf0b87ec51c2f2d344d12de0" exitCode=0 Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.512486 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"077ab4a6-779a-4573-bb2f-9ada89c60a3f","Type":"ContainerDied","Data":"17eaa539f0021eaed9d1e2b84c57c424a156ae5cbf0b87ec51c2f2d344d12de0"} Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.513674 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.514055 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection 
refused" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.554899 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.555569 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.555926 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.556353 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.712548 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-provider-selection\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.712919 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-trusted-ca-bundle\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.712984 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-session\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713023 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-idp-0-file-data\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713108 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdkx7\" (UniqueName: \"kubernetes.io/projected/ab146386-688e-4e0b-acf7-ee0d9c087d25-kube-api-access-bdkx7\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713169 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-ocp-branding-template\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713218 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-service-ca\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713270 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-error\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713311 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-login\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713355 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-policies\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713411 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-dir\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713443 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-router-certs\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: 
\"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.713473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-serving-cert\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.714116 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.714181 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.714483 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.714694 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-cliconfig\") pod \"ab146386-688e-4e0b-acf7-ee0d9c087d25\" (UID: \"ab146386-688e-4e0b-acf7-ee0d9c087d25\") " Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.714861 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.715221 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.715247 4865 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab146386-688e-4e0b-acf7-ee0d9c087d25-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.715264 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.715288 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.716316 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.721157 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.721230 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.721529 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.721964 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.722200 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.722287 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab146386-688e-4e0b-acf7-ee0d9c087d25-kube-api-access-bdkx7" (OuterVolumeSpecName: "kube-api-access-bdkx7") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "kube-api-access-bdkx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.722636 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.722829 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.723003 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ab146386-688e-4e0b-acf7-ee0d9c087d25" (UID: "ab146386-688e-4e0b-acf7-ee0d9c087d25"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816315 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816378 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816446 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816476 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816506 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816533 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816558 4865 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bdkx7\" (UniqueName: \"kubernetes.io/projected/ab146386-688e-4e0b-acf7-ee0d9c087d25-kube-api-access-bdkx7\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816584 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816611 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:06 crc kubenswrapper[4865]: I0103 04:20:06.816640 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab146386-688e-4e0b-acf7-ee0d9c087d25-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.524165 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.528233 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" event={"ID":"ab146386-688e-4e0b-acf7-ee0d9c087d25","Type":"ContainerDied","Data":"b8a09cd79b5ecc43264e18ff7850d75e79197be294bf85d3f9e582fbb553df05"} Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.528271 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.528311 4865 scope.go:117] "RemoveContainer" containerID="5d22cb4d6a03a5e2226c7c0f790b1667a67ceace4b6b1661b51c62f4907ee641" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.529645 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.531006 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.536175 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.536725 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.823054 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.824008 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.824475 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.930259 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-var-lock\") pod \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.930423 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kubelet-dir\") pod \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.930438 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-var-lock" (OuterVolumeSpecName: "var-lock") pod "077ab4a6-779a-4573-bb2f-9ada89c60a3f" (UID: "077ab4a6-779a-4573-bb2f-9ada89c60a3f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.930474 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kube-api-access\") pod \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\" (UID: \"077ab4a6-779a-4573-bb2f-9ada89c60a3f\") " Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.930493 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "077ab4a6-779a-4573-bb2f-9ada89c60a3f" (UID: "077ab4a6-779a-4573-bb2f-9ada89c60a3f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.930777 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.930797 4865 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/077ab4a6-779a-4573-bb2f-9ada89c60a3f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:07 crc kubenswrapper[4865]: I0103 04:20:07.935835 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "077ab4a6-779a-4573-bb2f-9ada89c60a3f" (UID: "077ab4a6-779a-4573-bb2f-9ada89c60a3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.032314 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/077ab4a6-779a-4573-bb2f-9ada89c60a3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.465917 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.467597 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.468377 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.468877 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.469362 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 
04:20:08.539793 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.541313 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd" exitCode=0 Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.541449 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.541535 4865 scope.go:117] "RemoveContainer" containerID="c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.546405 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"077ab4a6-779a-4573-bb2f-9ada89c60a3f","Type":"ContainerDied","Data":"5bd32f1014fd6cbdcb2ae52556ca4745441118ffc92703f590156fce1701dbd1"} Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.546463 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd32f1014fd6cbdcb2ae52556ca4745441118ffc92703f590156fce1701dbd1" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.546504 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.560133 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.560227 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.560298 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.560713 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.560829 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.560970 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.568972 4865 scope.go:117] "RemoveContainer" containerID="598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.570995 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.571992 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.572528 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.591489 4865 scope.go:117] "RemoveContainer" 
containerID="46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.610190 4865 scope.go:117] "RemoveContainer" containerID="2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.633476 4865 scope.go:117] "RemoveContainer" containerID="8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.656162 4865 scope.go:117] "RemoveContainer" containerID="00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.662491 4865 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.662532 4865 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.662551 4865 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.684215 4865 scope.go:117] "RemoveContainer" containerID="c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18" Jan 03 04:20:08 crc kubenswrapper[4865]: E0103 04:20:08.684940 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\": container with ID starting with c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18 not found: ID does not exist" 
containerID="c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.685001 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18"} err="failed to get container status \"c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\": rpc error: code = NotFound desc = could not find container \"c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18\": container with ID starting with c4974c72077ef7557147ec69278c0804ba4fb954ab5f20c30c4b206c1cd72e18 not found: ID does not exist" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.685050 4865 scope.go:117] "RemoveContainer" containerID="598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288" Jan 03 04:20:08 crc kubenswrapper[4865]: E0103 04:20:08.685745 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\": container with ID starting with 598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288 not found: ID does not exist" containerID="598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.685802 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288"} err="failed to get container status \"598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\": rpc error: code = NotFound desc = could not find container \"598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288\": container with ID starting with 598d94f947ade71113f99347e7e5185015d55aad831fbd765c8dce1e66802288 not found: ID does not exist" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.685841 4865 scope.go:117] 
"RemoveContainer" containerID="46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a" Jan 03 04:20:08 crc kubenswrapper[4865]: E0103 04:20:08.686457 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\": container with ID starting with 46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a not found: ID does not exist" containerID="46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.686621 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a"} err="failed to get container status \"46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\": rpc error: code = NotFound desc = could not find container \"46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a\": container with ID starting with 46912b44012d1614f7016e34daee4ae3de84732ad3a8184b81c7910c0683177a not found: ID does not exist" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.686763 4865 scope.go:117] "RemoveContainer" containerID="2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637" Jan 03 04:20:08 crc kubenswrapper[4865]: E0103 04:20:08.687498 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\": container with ID starting with 2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637 not found: ID does not exist" containerID="2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.687554 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637"} err="failed to get container status \"2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\": rpc error: code = NotFound desc = could not find container \"2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637\": container with ID starting with 2b1d8d2853fac0eb822c46deb7d221d54211b9f48ff89bbaf7b58d7313e01637 not found: ID does not exist" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.687590 4865 scope.go:117] "RemoveContainer" containerID="8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd" Jan 03 04:20:08 crc kubenswrapper[4865]: E0103 04:20:08.688108 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\": container with ID starting with 8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd not found: ID does not exist" containerID="8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.688162 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd"} err="failed to get container status \"8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\": rpc error: code = NotFound desc = could not find container \"8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd\": container with ID starting with 8fd70ded3f8501f830780649973bfe891005e2123b6b6860cca01f4dae6787fd not found: ID does not exist" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.688202 4865 scope.go:117] "RemoveContainer" containerID="00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d" Jan 03 04:20:08 crc kubenswrapper[4865]: E0103 04:20:08.688774 4865 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\": container with ID starting with 00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d not found: ID does not exist" containerID="00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.688981 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d"} err="failed to get container status \"00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\": rpc error: code = NotFound desc = could not find container \"00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d\": container with ID starting with 00fd8b3d927e6cd5e8e0ab9f9d891e66fe0f94cfe0117dc9eb79c2a2305a2c9d not found: ID does not exist" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.868572 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.869574 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:08 crc kubenswrapper[4865]: I0103 04:20:08.870196 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:09 crc kubenswrapper[4865]: E0103 04:20:09.162324 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" volumeName="registry-storage" Jan 03 04:20:09 crc kubenswrapper[4865]: I0103 04:20:09.171475 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 03 04:20:11 crc kubenswrapper[4865]: E0103 04:20:11.143202 4865 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:11 crc kubenswrapper[4865]: I0103 04:20:11.143929 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:11 crc kubenswrapper[4865]: E0103 04:20:11.177211 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18871dadd338f68a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 04:20:11.176482442 +0000 UTC m=+238.293535627,LastTimestamp:2026-01-03 04:20:11.176482442 +0000 UTC m=+238.293535627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 04:20:11 crc kubenswrapper[4865]: I0103 04:20:11.578051 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a01730a68b6b8398328602dbcf59ceef807f2c01f11e9f9662942bfa55ef103a"} Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.510672 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.511189 4865 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.511625 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.511969 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.513106 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:12 crc kubenswrapper[4865]: I0103 04:20:12.513158 4865 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.513692 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms" Jan 03 04:20:12 crc kubenswrapper[4865]: I0103 04:20:12.587125 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ba00729e584a6e0fafb58a7b2d55d71836103d645aca8d27a69170b706d0e52f"} Jan 03 04:20:12 crc 
kubenswrapper[4865]: I0103 04:20:12.587935 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.587982 4865 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:12 crc kubenswrapper[4865]: I0103 04:20:12.588466 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.714650 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Jan 03 04:20:12 crc kubenswrapper[4865]: E0103 04:20:12.750111 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18871dadd338f68a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 04:20:11.176482442 +0000 UTC m=+238.293535627,LastTimestamp:2026-01-03 04:20:11.176482442 +0000 UTC m=+238.293535627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 04:20:13 crc kubenswrapper[4865]: E0103 04:20:13.116637 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms" Jan 03 04:20:13 crc kubenswrapper[4865]: I0103 04:20:13.160613 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:13 crc kubenswrapper[4865]: I0103 04:20:13.161105 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:13 crc kubenswrapper[4865]: E0103 04:20:13.593968 4865 kubelet.go:1929] "Failed 
creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:13 crc kubenswrapper[4865]: E0103 04:20:13.917768 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s" Jan 03 04:20:15 crc kubenswrapper[4865]: E0103 04:20:15.518518 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.154804 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.155736 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.156320 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.179821 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.179872 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:18 crc kubenswrapper[4865]: E0103 04:20:18.180490 4865 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.181184 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:18 crc kubenswrapper[4865]: W0103 04:20:18.220533 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-693483e66b3357a61a5f348d6b3a11d349493c268c2bb50df8d8e8d8504c5eae WatchSource:0}: Error finding container 693483e66b3357a61a5f348d6b3a11d349493c268c2bb50df8d8e8d8504c5eae: Status 404 returned error can't find the container with id 693483e66b3357a61a5f348d6b3a11d349493c268c2bb50df8d8e8d8504c5eae Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.625683 4865 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0de95e0346453b0be4928fbf5c7bd0e6b44d33b7f329a387cb9e5407b0c59f89" exitCode=0 Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.625743 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0de95e0346453b0be4928fbf5c7bd0e6b44d33b7f329a387cb9e5407b0c59f89"} Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.625791 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"693483e66b3357a61a5f348d6b3a11d349493c268c2bb50df8d8e8d8504c5eae"} Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.626259 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.626297 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:18 crc kubenswrapper[4865]: E0103 04:20:18.626963 4865 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.626974 4865 status_manager.go:851] "Failed to get status for pod" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" pod="openshift-authentication/oauth-openshift-558db77b4-sqb86" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sqb86\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:18 crc kubenswrapper[4865]: I0103 04:20:18.627735 4865 status_manager.go:851] "Failed to get status for pod" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Jan 03 04:20:18 crc kubenswrapper[4865]: E0103 04:20:18.720343 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="6.4s" Jan 03 04:20:19 crc kubenswrapper[4865]: I0103 04:20:19.632730 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"331ae13edd0cf149d8a7656cddbe3c3b8b3f696e2c472e2a2bd59030830a99ef"} Jan 03 04:20:19 crc kubenswrapper[4865]: I0103 04:20:19.634267 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"10ef54ad059444628057415fdf719da8ca36c0c352e7744ac9d75398df1b9f30"} Jan 03 04:20:19 crc kubenswrapper[4865]: I0103 04:20:19.634460 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ed5b4faf5d3091cc3c6c8312b213ea622f440e488d9fd9d667731d051a128c6d"} Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.639920 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.639962 4865 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642" exitCode=1 Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.640010 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642"} Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.640434 4865 scope.go:117] "RemoveContainer" containerID="71caf825a2d972b9e1adb3336f524d74f088e1aecbf349b0931da165a7678642" Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.644355 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3c55f534bd50b901eb015b59bf9ae69564ad0922c76bf3f80a50d3d74f8a2470"} Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.644585 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:20 crc 
kubenswrapper[4865]: I0103 04:20:20.644698 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b75750ae9d59212f7a4c5638fedc34b8d2659f5819d47084962e6758cbfbc1d9"} Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.644647 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:20 crc kubenswrapper[4865]: I0103 04:20:20.644902 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:21 crc kubenswrapper[4865]: I0103 04:20:21.654650 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 03 04:20:21 crc kubenswrapper[4865]: I0103 04:20:21.655018 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"18abe35ae91eabf4dcde5794d88e24e36e45324794bb4774ce6739c9575ef206"} Jan 03 04:20:23 crc kubenswrapper[4865]: I0103 04:20:23.181816 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:23 crc kubenswrapper[4865]: I0103 04:20:23.182295 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:23 crc kubenswrapper[4865]: I0103 04:20:23.189279 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:25 crc kubenswrapper[4865]: I0103 04:20:25.669285 4865 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:25 crc kubenswrapper[4865]: I0103 04:20:25.825722 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2a58d4b2-83a0-4381-8bc6-fd6957b93e11" Jan 03 04:20:26 crc kubenswrapper[4865]: I0103 04:20:26.689841 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:26 crc kubenswrapper[4865]: I0103 04:20:26.690186 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:26 crc kubenswrapper[4865]: I0103 04:20:26.692957 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2a58d4b2-83a0-4381-8bc6-fd6957b93e11" Jan 03 04:20:26 crc kubenswrapper[4865]: I0103 04:20:26.697245 4865 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://ed5b4faf5d3091cc3c6c8312b213ea622f440e488d9fd9d667731d051a128c6d" Jan 03 04:20:26 crc kubenswrapper[4865]: I0103 04:20:26.697286 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:27 crc kubenswrapper[4865]: I0103 04:20:27.695275 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:27 crc kubenswrapper[4865]: I0103 04:20:27.695326 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:27 crc kubenswrapper[4865]: I0103 04:20:27.700233 4865 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2a58d4b2-83a0-4381-8bc6-fd6957b93e11" Jan 03 04:20:28 crc kubenswrapper[4865]: I0103 04:20:28.311266 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:20:29 crc kubenswrapper[4865]: I0103 04:20:29.372816 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:20:29 crc kubenswrapper[4865]: I0103 04:20:29.380558 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:20:37 crc kubenswrapper[4865]: I0103 04:20:37.665281 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.038766 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.131042 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.208927 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.306476 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.318785 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.472314 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.497214 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.765277 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 03 04:20:38 crc kubenswrapper[4865]: I0103 04:20:38.897048 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.011063 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.091044 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.385009 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.422313 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.425616 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.570020 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.883886 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.909780 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.911302 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.969721 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 03 04:20:39 crc kubenswrapper[4865]: I0103 04:20:39.974594 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.073093 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.094973 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.118626 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.187527 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.225072 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 
04:20:40.257778 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.296806 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.403142 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.419215 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.438805 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.505647 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.510242 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.630199 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.956141 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.956852 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 03 04:20:40 crc kubenswrapper[4865]: I0103 04:20:40.992608 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.013132 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.068259 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.227458 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.266924 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.270093 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.328545 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.341199 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.611932 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.626045 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.634871 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.683869 
4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.701974 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.742045 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.756267 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.772001 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.863784 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.897246 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.897747 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.992237 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 03 04:20:41 crc kubenswrapper[4865]: I0103 04:20:41.996813 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.004929 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 03 
04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.060324 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.071109 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.096571 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.125472 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.133461 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.205608 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.262417 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.285311 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.293964 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.387168 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.452957 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.491761 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.553128 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.553455 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.623250 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.748361 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.794818 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 03 04:20:42 crc kubenswrapper[4865]: I0103 04:20:42.960604 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.007571 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.090500 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.145123 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.151422 4865 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.184149 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.189625 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.213071 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.221671 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.278349 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.280298 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.376745 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.441076 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.459631 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.513220 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.545030 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.567233 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.583655 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.675710 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.701734 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.803716 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.840186 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.854083 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.863260 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 03 04:20:43 crc kubenswrapper[4865]: I0103 04:20:43.891501 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.015595 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 03 04:20:44 crc kubenswrapper[4865]: 
I0103 04:20:44.070865 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.116712 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.196151 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.233553 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.283674 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.338089 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.407303 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.456473 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.527505 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.542818 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.692624 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.710671 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.829447 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 03 04:20:44 crc kubenswrapper[4865]: I0103 04:20:44.918618 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.080105 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.088897 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.111702 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.117821 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.150460 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.207941 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.211814 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 
04:20:45.228189 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.228205 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.235866 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.305028 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.513646 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.543933 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.570119 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.650256 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.732183 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.770333 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 04:20:45.813682 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 03 04:20:45 crc kubenswrapper[4865]: I0103 
04:20:45.895365 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.003592 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.046141 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.450493 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.536929 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.717425 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.795931 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.847555 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.901846 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 03 04:20:46 crc kubenswrapper[4865]: I0103 04:20:46.944791 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.191156 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 03 04:20:47 crc 
kubenswrapper[4865]: I0103 04:20:47.245330 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.370488 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.439037 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.450291 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.459291 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.460619 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.463728 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.666398 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.769751 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.783202 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.844137 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.870044 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.878587 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 03 04:20:47 crc kubenswrapper[4865]: I0103 04:20:47.979664 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.013376 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.124485 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.126598 4865 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.130949 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-sqb86"] Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131012 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b788bb46c-df5gk","openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 04:20:48 crc kubenswrapper[4865]: E0103 04:20:48.131241 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" containerName="oauth-openshift" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131261 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" containerName="oauth-openshift" Jan 03 04:20:48 crc kubenswrapper[4865]: E0103 04:20:48.131279 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" containerName="installer" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131287 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" containerName="installer" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131426 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" containerName="oauth-openshift" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131454 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="077ab4a6-779a-4573-bb2f-9ada89c60a3f" containerName="installer" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131746 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131825 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ec06e4a8-7a39-4921-8852-0fcb3035f15e" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.131800 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.134854 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.135063 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.135370 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.135633 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.135767 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.135941 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.137644 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.138915 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.139674 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.140210 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 03 04:20:48 crc 
kubenswrapper[4865]: I0103 04:20:48.140923 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.140926 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.147319 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.149857 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.153142 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.155786 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.165941 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.165921593 podStartE2EDuration="23.165921593s" podCreationTimestamp="2026-01-03 04:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:20:48.163007465 +0000 UTC m=+275.280060660" watchObservedRunningTime="2026-01-03 04:20:48.165921593 +0000 UTC m=+275.282974788" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.166854 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.186646 4865 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.236788 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-session\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244729 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244759 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9j9n\" (UniqueName: \"kubernetes.io/projected/6cd14c2e-6295-4f86-af16-b35921533c92-kube-api-access-x9j9n\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244786 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: 
\"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244817 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244931 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.244982 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-login\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 
04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.245043 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-error\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.245068 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.245088 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.245110 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.245133 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-audit-policies\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.245156 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cd14c2e-6295-4f86-af16-b35921533c92-audit-dir\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346288 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346340 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346374 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-login\") pod 
\"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346421 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-error\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346439 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346460 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 
04:20:48.346504 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-audit-policies\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346522 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cd14c2e-6295-4f86-af16-b35921533c92-audit-dir\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346550 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346567 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-session\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9j9n\" (UniqueName: \"kubernetes.io/projected/6cd14c2e-6295-4f86-af16-b35921533c92-kube-api-access-x9j9n\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: 
\"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346621 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.346645 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.347220 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.347528 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.348299 4865 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.348369 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cd14c2e-6295-4f86-af16-b35921533c92-audit-dir\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.348487 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cd14c2e-6295-4f86-af16-b35921533c92-audit-policies\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.357953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-error\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.358820 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc 
kubenswrapper[4865]: I0103 04:20:48.359061 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.359093 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.359703 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-session\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.359851 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.360362 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.376200 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6cd14c2e-6295-4f86-af16-b35921533c92-v4-0-config-user-template-login\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.381508 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.387166 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9j9n\" (UniqueName: \"kubernetes.io/projected/6cd14c2e-6295-4f86-af16-b35921533c92-kube-api-access-x9j9n\") pod \"oauth-openshift-6b788bb46c-df5gk\" (UID: \"6cd14c2e-6295-4f86-af16-b35921533c92\") " pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.390639 4865 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.390902 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ba00729e584a6e0fafb58a7b2d55d71836103d645aca8d27a69170b706d0e52f" gracePeriod=5 Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.404458 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.404947 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.443870 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.447670 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.459552 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.577012 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.585980 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.596175 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.629440 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b788bb46c-df5gk"] Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.833487 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" event={"ID":"6cd14c2e-6295-4f86-af16-b35921533c92","Type":"ContainerStarted","Data":"d2a3aee92efd1c92bf1c028f5063d896d253aa4c9b468209ac22ba851d13c7ab"} Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.883029 4865 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 03 04:20:48 crc kubenswrapper[4865]: I0103 04:20:48.966647 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.038492 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.090520 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.163234 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab146386-688e-4e0b-acf7-ee0d9c087d25" path="/var/lib/kubelet/pods/ab146386-688e-4e0b-acf7-ee0d9c087d25/volumes" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.270365 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.285206 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.337007 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.457181 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.582404 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.582444 4865 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.599177 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.673082 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.710429 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.729151 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.768220 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.840415 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" event={"ID":"6cd14c2e-6295-4f86-af16-b35921533c92","Type":"ContainerStarted","Data":"90ff087bf3538420de115c6f0f2adc91c7215c9385adabcfd44c1479b45027b5"} Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.840844 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.841315 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.848101 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.865767 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b788bb46c-df5gk" podStartSLOduration=68.865749798 podStartE2EDuration="1m8.865749798s" podCreationTimestamp="2026-01-03 04:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:20:49.864947635 +0000 UTC m=+276.982000830" watchObservedRunningTime="2026-01-03 04:20:49.865749798 +0000 UTC m=+276.982802983" Jan 03 04:20:49 crc kubenswrapper[4865]: I0103 04:20:49.964145 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.010037 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.249365 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.293966 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.379189 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.441556 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.461309 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.499820 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 03 04:20:50 crc 
kubenswrapper[4865]: I0103 04:20:50.529345 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.531651 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.547548 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.656515 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.757627 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.832453 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.873076 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.937060 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.975591 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.985449 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 03 04:20:50 crc kubenswrapper[4865]: I0103 04:20:50.988767 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.019114 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.044587 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.085879 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.133454 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.164822 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.172413 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.200702 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.267716 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.399653 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.499705 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 
04:20:51.513448 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.558941 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.616972 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.638135 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.651802 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.707327 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.712654 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.736340 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.885522 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 03 04:20:51 crc kubenswrapper[4865]: I0103 04:20:51.973253 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 03 04:20:52 crc kubenswrapper[4865]: I0103 04:20:52.038701 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 03 04:20:52 crc kubenswrapper[4865]: I0103 04:20:52.289445 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 03 04:20:52 crc kubenswrapper[4865]: I0103 04:20:52.351987 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 03 04:20:52 crc kubenswrapper[4865]: I0103 04:20:52.369071 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 03 04:20:52 crc kubenswrapper[4865]: I0103 04:20:52.517321 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 03 04:20:52 crc kubenswrapper[4865]: I0103 04:20:52.526787 4865 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 03 04:20:52 crc kubenswrapper[4865]: I0103 04:20:52.982526 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 03 04:20:53 crc kubenswrapper[4865]: I0103 04:20:53.851081 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 03 04:20:53 crc kubenswrapper[4865]: I0103 04:20:53.866230 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 03 04:20:53 crc kubenswrapper[4865]: I0103 04:20:53.866482 4865 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ba00729e584a6e0fafb58a7b2d55d71836103d645aca8d27a69170b706d0e52f" exitCode=137 Jan 03 04:20:53 crc kubenswrapper[4865]: I0103 04:20:53.972446 4865 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 03 04:20:53 crc kubenswrapper[4865]: I0103 04:20:53.972525 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036085 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036145 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036203 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036219 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036276 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036247 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036307 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036329 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036373 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.036982 4865 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.037015 4865 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.037039 4865 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.037060 4865 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.042806 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.138630 4865 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.449637 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.876134 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.876232 4865 scope.go:117] "RemoveContainer" containerID="ba00729e584a6e0fafb58a7b2d55d71836103d645aca8d27a69170b706d0e52f" Jan 03 04:20:54 crc kubenswrapper[4865]: I0103 04:20:54.876344 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 04:20:55 crc kubenswrapper[4865]: I0103 04:20:55.165740 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 03 04:20:55 crc kubenswrapper[4865]: I0103 04:20:55.368857 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 03 04:20:55 crc kubenswrapper[4865]: I0103 04:20:55.585228 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 03 04:21:09 crc kubenswrapper[4865]: I0103 04:21:09.969440 4865 generic.go:334] "Generic (PLEG): container finished" podID="aaf191da-bc40-411b-bef2-649b5063978e" containerID="e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb" exitCode=0 Jan 03 04:21:09 crc kubenswrapper[4865]: I0103 04:21:09.969549 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" event={"ID":"aaf191da-bc40-411b-bef2-649b5063978e","Type":"ContainerDied","Data":"e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb"} Jan 03 04:21:09 crc kubenswrapper[4865]: I0103 04:21:09.970891 4865 scope.go:117] "RemoveContainer" containerID="e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb" Jan 03 04:21:10 crc kubenswrapper[4865]: I0103 04:21:10.976766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" event={"ID":"aaf191da-bc40-411b-bef2-649b5063978e","Type":"ContainerStarted","Data":"10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec"} Jan 03 04:21:10 crc kubenswrapper[4865]: I0103 04:21:10.977476 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:21:10 
crc kubenswrapper[4865]: I0103 04:21:10.978619 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:21:13 crc kubenswrapper[4865]: I0103 04:21:13.018519 4865 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.428283 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-742r2"] Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.428865 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" podUID="31d35f74-419c-478e-a1e0-232ad73e7084" containerName="controller-manager" containerID="cri-o://96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e" gracePeriod=30 Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.529838 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m"] Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.530085 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" podUID="da462e42-c060-4423-8e89-ded4d08f2868" containerName="route-controller-manager" containerID="cri-o://31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244" gracePeriod=30 Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.781045 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.846532 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-client-ca\") pod \"31d35f74-419c-478e-a1e0-232ad73e7084\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.846565 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d35f74-419c-478e-a1e0-232ad73e7084-serving-cert\") pod \"31d35f74-419c-478e-a1e0-232ad73e7084\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.846586 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl8r2\" (UniqueName: \"kubernetes.io/projected/31d35f74-419c-478e-a1e0-232ad73e7084-kube-api-access-sl8r2\") pod \"31d35f74-419c-478e-a1e0-232ad73e7084\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.846611 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-proxy-ca-bundles\") pod \"31d35f74-419c-478e-a1e0-232ad73e7084\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.846628 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-config\") pod \"31d35f74-419c-478e-a1e0-232ad73e7084\" (UID: \"31d35f74-419c-478e-a1e0-232ad73e7084\") " Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.847535 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-config" (OuterVolumeSpecName: "config") pod "31d35f74-419c-478e-a1e0-232ad73e7084" (UID: "31d35f74-419c-478e-a1e0-232ad73e7084"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.847969 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-client-ca" (OuterVolumeSpecName: "client-ca") pod "31d35f74-419c-478e-a1e0-232ad73e7084" (UID: "31d35f74-419c-478e-a1e0-232ad73e7084"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.851024 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "31d35f74-419c-478e-a1e0-232ad73e7084" (UID: "31d35f74-419c-478e-a1e0-232ad73e7084"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.856165 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d35f74-419c-478e-a1e0-232ad73e7084-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "31d35f74-419c-478e-a1e0-232ad73e7084" (UID: "31d35f74-419c-478e-a1e0-232ad73e7084"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.857199 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d35f74-419c-478e-a1e0-232ad73e7084-kube-api-access-sl8r2" (OuterVolumeSpecName: "kube-api-access-sl8r2") pod "31d35f74-419c-478e-a1e0-232ad73e7084" (UID: "31d35f74-419c-478e-a1e0-232ad73e7084"). InnerVolumeSpecName "kube-api-access-sl8r2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.890167 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.948843 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.948967 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d35f74-419c-478e-a1e0-232ad73e7084-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.949085 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl8r2\" (UniqueName: \"kubernetes.io/projected/31d35f74-419c-478e-a1e0-232ad73e7084-kube-api-access-sl8r2\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.949105 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:15 crc kubenswrapper[4865]: I0103 04:21:15.949122 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d35f74-419c-478e-a1e0-232ad73e7084-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.005798 4865 generic.go:334] "Generic (PLEG): container finished" podID="da462e42-c060-4423-8e89-ded4d08f2868" containerID="31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244" exitCode=0 Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.005847 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.005864 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" event={"ID":"da462e42-c060-4423-8e89-ded4d08f2868","Type":"ContainerDied","Data":"31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244"} Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.005899 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m" event={"ID":"da462e42-c060-4423-8e89-ded4d08f2868","Type":"ContainerDied","Data":"2a96d58ff062d4c23f469a3ac60f024b25e483846a61e098e47d03133aff65ee"} Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.005955 4865 scope.go:117] "RemoveContainer" containerID="31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.007887 4865 generic.go:334] "Generic (PLEG): container finished" podID="31d35f74-419c-478e-a1e0-232ad73e7084" containerID="96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e" exitCode=0 Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.007922 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" event={"ID":"31d35f74-419c-478e-a1e0-232ad73e7084","Type":"ContainerDied","Data":"96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e"} Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.007946 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" event={"ID":"31d35f74-419c-478e-a1e0-232ad73e7084","Type":"ContainerDied","Data":"43d6e1aa642c32bcfc2adc68114c180e1dd0122afbde4deb78a57a44a62dfeac"} Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.007978 4865 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-742r2" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.027578 4865 scope.go:117] "RemoveContainer" containerID="31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244" Jan 03 04:21:16 crc kubenswrapper[4865]: E0103 04:21:16.030679 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244\": container with ID starting with 31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244 not found: ID does not exist" containerID="31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.030783 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244"} err="failed to get container status \"31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244\": rpc error: code = NotFound desc = could not find container \"31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244\": container with ID starting with 31958a541e1629b287efb0b89d766e18dece3e522021d1aba1f88695eb18d244 not found: ID does not exist" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.030830 4865 scope.go:117] "RemoveContainer" containerID="96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.036303 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-742r2"] Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.041498 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-742r2"] Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.049999 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk4f5\" (UniqueName: \"kubernetes.io/projected/da462e42-c060-4423-8e89-ded4d08f2868-kube-api-access-jk4f5\") pod \"da462e42-c060-4423-8e89-ded4d08f2868\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.050144 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-config\") pod \"da462e42-c060-4423-8e89-ded4d08f2868\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.050190 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-client-ca\") pod \"da462e42-c060-4423-8e89-ded4d08f2868\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.050373 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da462e42-c060-4423-8e89-ded4d08f2868-serving-cert\") pod \"da462e42-c060-4423-8e89-ded4d08f2868\" (UID: \"da462e42-c060-4423-8e89-ded4d08f2868\") " Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.050870 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-config" (OuterVolumeSpecName: "config") pod "da462e42-c060-4423-8e89-ded4d08f2868" (UID: "da462e42-c060-4423-8e89-ded4d08f2868"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.052371 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-client-ca" (OuterVolumeSpecName: "client-ca") pod "da462e42-c060-4423-8e89-ded4d08f2868" (UID: "da462e42-c060-4423-8e89-ded4d08f2868"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.055878 4865 scope.go:117] "RemoveContainer" containerID="96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.056251 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da462e42-c060-4423-8e89-ded4d08f2868-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da462e42-c060-4423-8e89-ded4d08f2868" (UID: "da462e42-c060-4423-8e89-ded4d08f2868"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.056361 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da462e42-c060-4423-8e89-ded4d08f2868-kube-api-access-jk4f5" (OuterVolumeSpecName: "kube-api-access-jk4f5") pod "da462e42-c060-4423-8e89-ded4d08f2868" (UID: "da462e42-c060-4423-8e89-ded4d08f2868"). InnerVolumeSpecName "kube-api-access-jk4f5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:16 crc kubenswrapper[4865]: E0103 04:21:16.056426 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e\": container with ID starting with 96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e not found: ID does not exist" containerID="96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.056474 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e"} err="failed to get container status \"96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e\": rpc error: code = NotFound desc = could not find container \"96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e\": container with ID starting with 96b91b801e3cd6c559f5b9626989d3ef1392fe22f23ea104306cedd3cc26632e not found: ID does not exist" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.151550 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.151603 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da462e42-c060-4423-8e89-ded4d08f2868-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.151621 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da462e42-c060-4423-8e89-ded4d08f2868-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.151643 4865 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-jk4f5\" (UniqueName: \"kubernetes.io/projected/da462e42-c060-4423-8e89-ded4d08f2868-kube-api-access-jk4f5\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.349341 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m"] Jan 03 04:21:16 crc kubenswrapper[4865]: I0103 04:21:16.356038 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dbw7m"] Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.089452 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-fj9mm"] Jan 03 04:21:17 crc kubenswrapper[4865]: E0103 04:21:17.089943 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.089972 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 04:21:17 crc kubenswrapper[4865]: E0103 04:21:17.089998 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da462e42-c060-4423-8e89-ded4d08f2868" containerName="route-controller-manager" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.090015 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="da462e42-c060-4423-8e89-ded4d08f2868" containerName="route-controller-manager" Jan 03 04:21:17 crc kubenswrapper[4865]: E0103 04:21:17.090049 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d35f74-419c-478e-a1e0-232ad73e7084" containerName="controller-manager" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.090095 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d35f74-419c-478e-a1e0-232ad73e7084" containerName="controller-manager" Jan 03 04:21:17 crc 
kubenswrapper[4865]: I0103 04:21:17.090316 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.090350 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="da462e42-c060-4423-8e89-ded4d08f2868" containerName="route-controller-manager" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.090420 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d35f74-419c-478e-a1e0-232ad73e7084" containerName="controller-manager" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.091329 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.094663 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.096070 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss"] Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.096332 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.096445 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.096553 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.096648 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 
04:21:17.097366 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.099911 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.103042 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.103140 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.103773 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.103800 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.103916 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.105072 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.112272 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-fj9mm"] Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.119126 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.126119 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss"] Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.164643 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d35f74-419c-478e-a1e0-232ad73e7084" path="/var/lib/kubelet/pods/31d35f74-419c-478e-a1e0-232ad73e7084/volumes" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.165766 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da462e42-c060-4423-8e89-ded4d08f2868" path="/var/lib/kubelet/pods/da462e42-c060-4423-8e89-ded4d08f2868/volumes" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.265545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98d83f-e44f-4dae-be12-6bf375977bc8-serving-cert\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.265655 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-config\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.265702 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-config\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.265766 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7vn\" (UniqueName: \"kubernetes.io/projected/1daaee3b-373f-42b1-8115-ed46ffa3d657-kube-api-access-kn7vn\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.265918 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-proxy-ca-bundles\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.266090 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1daaee3b-373f-42b1-8115-ed46ffa3d657-serving-cert\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.266209 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b82r6\" (UniqueName: \"kubernetes.io/projected/3d98d83f-e44f-4dae-be12-6bf375977bc8-kube-api-access-b82r6\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.266474 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-client-ca\") pod 
\"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.266578 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-client-ca\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370158 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-config\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370233 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-config\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370299 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7vn\" (UniqueName: \"kubernetes.io/projected/1daaee3b-373f-42b1-8115-ed46ffa3d657-kube-api-access-kn7vn\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370338 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-proxy-ca-bundles\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370378 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1daaee3b-373f-42b1-8115-ed46ffa3d657-serving-cert\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370452 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b82r6\" (UniqueName: \"kubernetes.io/projected/3d98d83f-e44f-4dae-be12-6bf375977bc8-kube-api-access-b82r6\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370502 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-client-ca\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370552 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-client-ca\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " 
pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.370625 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98d83f-e44f-4dae-be12-6bf375977bc8-serving-cert\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.372800 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-client-ca\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.373111 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-config\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.373157 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-config\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.373634 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-proxy-ca-bundles\") pod 
\"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.374003 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-client-ca\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.380586 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1daaee3b-373f-42b1-8115-ed46ffa3d657-serving-cert\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.380612 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98d83f-e44f-4dae-be12-6bf375977bc8-serving-cert\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.401625 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7vn\" (UniqueName: \"kubernetes.io/projected/1daaee3b-373f-42b1-8115-ed46ffa3d657-kube-api-access-kn7vn\") pod \"controller-manager-856f74b64f-fj9mm\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.403096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b82r6\" (UniqueName: 
\"kubernetes.io/projected/3d98d83f-e44f-4dae-be12-6bf375977bc8-kube-api-access-b82r6\") pod \"route-controller-manager-67cc8d88b-nx4ss\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.433159 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.449157 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.901048 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-fj9mm"] Jan 03 04:21:17 crc kubenswrapper[4865]: I0103 04:21:17.904094 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss"] Jan 03 04:21:17 crc kubenswrapper[4865]: W0103 04:21:17.912808 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1daaee3b_373f_42b1_8115_ed46ffa3d657.slice/crio-1cfd5b195a3c5eb236fdac7f80d63f83dfa6ccb7f55ecd3cbb6596e0af7c8376 WatchSource:0}: Error finding container 1cfd5b195a3c5eb236fdac7f80d63f83dfa6ccb7f55ecd3cbb6596e0af7c8376: Status 404 returned error can't find the container with id 1cfd5b195a3c5eb236fdac7f80d63f83dfa6ccb7f55ecd3cbb6596e0af7c8376 Jan 03 04:21:17 crc kubenswrapper[4865]: W0103 04:21:17.913966 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d98d83f_e44f_4dae_be12_6bf375977bc8.slice/crio-bfd1e4142174e5644781a92b66e85946b07452902be5a56d7d9c5bce1f40574d WatchSource:0}: Error finding container 
bfd1e4142174e5644781a92b66e85946b07452902be5a56d7d9c5bce1f40574d: Status 404 returned error can't find the container with id bfd1e4142174e5644781a92b66e85946b07452902be5a56d7d9c5bce1f40574d Jan 03 04:21:18 crc kubenswrapper[4865]: I0103 04:21:18.035174 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" event={"ID":"3d98d83f-e44f-4dae-be12-6bf375977bc8","Type":"ContainerStarted","Data":"bfd1e4142174e5644781a92b66e85946b07452902be5a56d7d9c5bce1f40574d"} Jan 03 04:21:18 crc kubenswrapper[4865]: I0103 04:21:18.036981 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" event={"ID":"1daaee3b-373f-42b1-8115-ed46ffa3d657","Type":"ContainerStarted","Data":"1cfd5b195a3c5eb236fdac7f80d63f83dfa6ccb7f55ecd3cbb6596e0af7c8376"} Jan 03 04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.045682 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" event={"ID":"3d98d83f-e44f-4dae-be12-6bf375977bc8","Type":"ContainerStarted","Data":"fbc48c3abd180bc5e85d1fb8a58816b674d09cdd4121df77c150e50f53c99379"} Jan 03 04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.047031 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.048014 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" event={"ID":"1daaee3b-373f-42b1-8115-ed46ffa3d657","Type":"ContainerStarted","Data":"efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579"} Jan 03 04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.048320 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 
04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.054968 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.056179 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.070710 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" podStartSLOduration=4.070682967 podStartE2EDuration="4.070682967s" podCreationTimestamp="2026-01-03 04:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:21:19.068687726 +0000 UTC m=+306.185740951" watchObservedRunningTime="2026-01-03 04:21:19.070682967 +0000 UTC m=+306.187736192" Jan 03 04:21:19 crc kubenswrapper[4865]: I0103 04:21:19.091747 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" podStartSLOduration=4.091719932 podStartE2EDuration="4.091719932s" podCreationTimestamp="2026-01-03 04:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:21:19.085990289 +0000 UTC m=+306.203043514" watchObservedRunningTime="2026-01-03 04:21:19.091719932 +0000 UTC m=+306.208773157" Jan 03 04:21:46 crc kubenswrapper[4865]: I0103 04:21:46.947346 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2hcpf"] Jan 03 04:21:46 crc kubenswrapper[4865]: I0103 04:21:46.948847 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:46 crc kubenswrapper[4865]: I0103 04:21:46.981666 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2hcpf"] Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.077764 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.077820 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-registry-certificates\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.077845 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-trusted-ca\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.077869 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-bound-sa-token\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" 
Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.077917 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcqh\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-kube-api-access-5rcqh\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.077957 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.078018 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.078057 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-registry-tls\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.100805 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.179168 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-trusted-ca\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.179776 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-bound-sa-token\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.179825 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rcqh\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-kube-api-access-5rcqh\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.179865 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.179955 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-registry-tls\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.181629 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.181674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-registry-certificates\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.181451 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-trusted-ca\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.182915 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 
crc kubenswrapper[4865]: I0103 04:21:47.183294 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-registry-certificates\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.192178 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.194987 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-registry-tls\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.213510 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-bound-sa-token\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.217602 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcqh\" (UniqueName: \"kubernetes.io/projected/e0209965-3df2-4eb2-9a55-48ce3bb7b3f6-kube-api-access-5rcqh\") pod \"image-registry-66df7c8f76-2hcpf\" (UID: \"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.270890 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:47 crc kubenswrapper[4865]: I0103 04:21:47.744970 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2hcpf"] Jan 03 04:21:48 crc kubenswrapper[4865]: I0103 04:21:48.232343 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" event={"ID":"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6","Type":"ContainerStarted","Data":"ee5b04d795599824550c6baf4812bb9c3603fffae3b06f55489bbb1eb13a9845"} Jan 03 04:21:48 crc kubenswrapper[4865]: I0103 04:21:48.232412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" event={"ID":"e0209965-3df2-4eb2-9a55-48ce3bb7b3f6","Type":"ContainerStarted","Data":"6610f2f5bdd8fc288b39674c731f1ccb539d9d4bb130f8eba8c0dd532282afe6"} Jan 03 04:21:48 crc kubenswrapper[4865]: I0103 04:21:48.232517 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:21:48 crc kubenswrapper[4865]: I0103 04:21:48.277926 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" podStartSLOduration=2.277887084 podStartE2EDuration="2.277887084s" podCreationTimestamp="2026-01-03 04:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:21:48.258880968 +0000 UTC m=+335.375934163" watchObservedRunningTime="2026-01-03 04:21:48.277887084 +0000 UTC m=+335.394940329" Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.433939 4865 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-fj9mm"] Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.434667 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" podUID="1daaee3b-373f-42b1-8115-ed46ffa3d657" containerName="controller-manager" containerID="cri-o://efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579" gracePeriod=30 Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.886144 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.921281 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn7vn\" (UniqueName: \"kubernetes.io/projected/1daaee3b-373f-42b1-8115-ed46ffa3d657-kube-api-access-kn7vn\") pod \"1daaee3b-373f-42b1-8115-ed46ffa3d657\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.921330 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-proxy-ca-bundles\") pod \"1daaee3b-373f-42b1-8115-ed46ffa3d657\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.921407 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-config\") pod \"1daaee3b-373f-42b1-8115-ed46ffa3d657\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.921443 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1daaee3b-373f-42b1-8115-ed46ffa3d657-serving-cert\") pod \"1daaee3b-373f-42b1-8115-ed46ffa3d657\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.921470 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-client-ca\") pod \"1daaee3b-373f-42b1-8115-ed46ffa3d657\" (UID: \"1daaee3b-373f-42b1-8115-ed46ffa3d657\") " Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.922273 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-client-ca" (OuterVolumeSpecName: "client-ca") pod "1daaee3b-373f-42b1-8115-ed46ffa3d657" (UID: "1daaee3b-373f-42b1-8115-ed46ffa3d657"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.922549 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1daaee3b-373f-42b1-8115-ed46ffa3d657" (UID: "1daaee3b-373f-42b1-8115-ed46ffa3d657"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.922887 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-config" (OuterVolumeSpecName: "config") pod "1daaee3b-373f-42b1-8115-ed46ffa3d657" (UID: "1daaee3b-373f-42b1-8115-ed46ffa3d657"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.927355 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1daaee3b-373f-42b1-8115-ed46ffa3d657-kube-api-access-kn7vn" (OuterVolumeSpecName: "kube-api-access-kn7vn") pod "1daaee3b-373f-42b1-8115-ed46ffa3d657" (UID: "1daaee3b-373f-42b1-8115-ed46ffa3d657"). InnerVolumeSpecName "kube-api-access-kn7vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:55 crc kubenswrapper[4865]: I0103 04:21:55.928517 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1daaee3b-373f-42b1-8115-ed46ffa3d657-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1daaee3b-373f-42b1-8115-ed46ffa3d657" (UID: "1daaee3b-373f-42b1-8115-ed46ffa3d657"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.022456 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn7vn\" (UniqueName: \"kubernetes.io/projected/1daaee3b-373f-42b1-8115-ed46ffa3d657-kube-api-access-kn7vn\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.022485 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.022496 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.022504 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1daaee3b-373f-42b1-8115-ed46ffa3d657-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.022512 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1daaee3b-373f-42b1-8115-ed46ffa3d657-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.290677 4865 generic.go:334] "Generic (PLEG): container finished" podID="1daaee3b-373f-42b1-8115-ed46ffa3d657" containerID="efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579" exitCode=0 Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.290735 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" event={"ID":"1daaee3b-373f-42b1-8115-ed46ffa3d657","Type":"ContainerDied","Data":"efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579"} Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.291026 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" event={"ID":"1daaee3b-373f-42b1-8115-ed46ffa3d657","Type":"ContainerDied","Data":"1cfd5b195a3c5eb236fdac7f80d63f83dfa6ccb7f55ecd3cbb6596e0af7c8376"} Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.291060 4865 scope.go:117] "RemoveContainer" containerID="efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.290776 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856f74b64f-fj9mm" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.319706 4865 scope.go:117] "RemoveContainer" containerID="efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579" Jan 03 04:21:56 crc kubenswrapper[4865]: E0103 04:21:56.321056 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579\": container with ID starting with efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579 not found: ID does not exist" containerID="efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.321108 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579"} err="failed to get container status \"efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579\": rpc error: code = NotFound desc = could not find container \"efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579\": container with ID starting with efef6b66224a51e172638d8b9bcdd2ceb577eb744a05cf38b94512ac4cc83579 not found: ID does not exist" Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.340291 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-fj9mm"] Jan 03 04:21:56 crc kubenswrapper[4865]: I0103 04:21:56.351769 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-856f74b64f-fj9mm"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.116259 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84c694c977-x97xf"] Jan 03 04:21:57 crc kubenswrapper[4865]: E0103 04:21:57.116828 4865 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1daaee3b-373f-42b1-8115-ed46ffa3d657" containerName="controller-manager" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.116845 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1daaee3b-373f-42b1-8115-ed46ffa3d657" containerName="controller-manager" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.116957 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1daaee3b-373f-42b1-8115-ed46ffa3d657" containerName="controller-manager" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.117452 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.119765 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.120151 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.120550 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.120563 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.121798 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.126136 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.131863 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-84c694c977-x97xf"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.136562 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.138791 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-config\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.138853 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c975965-c37d-4630-943a-08b57a9e0693-serving-cert\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.138887 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-client-ca\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.138952 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmdl\" (UniqueName: \"kubernetes.io/projected/0c975965-c37d-4630-943a-08b57a9e0693-kube-api-access-9jmdl\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " 
pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.138978 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-proxy-ca-bundles\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.163720 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1daaee3b-373f-42b1-8115-ed46ffa3d657" path="/var/lib/kubelet/pods/1daaee3b-373f-42b1-8115-ed46ffa3d657/volumes" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.240260 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-config\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.240687 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c975965-c37d-4630-943a-08b57a9e0693-serving-cert\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.240887 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-client-ca\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" 
Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.241159 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jmdl\" (UniqueName: \"kubernetes.io/projected/0c975965-c37d-4630-943a-08b57a9e0693-kube-api-access-9jmdl\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.242878 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-proxy-ca-bundles\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.242462 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-client-ca\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.241654 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-config\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.246312 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c975965-c37d-4630-943a-08b57a9e0693-proxy-ca-bundles\") pod \"controller-manager-84c694c977-x97xf\" (UID: 
\"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.251403 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c975965-c37d-4630-943a-08b57a9e0693-serving-cert\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.269140 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jmdl\" (UniqueName: \"kubernetes.io/projected/0c975965-c37d-4630-943a-08b57a9e0693-kube-api-access-9jmdl\") pod \"controller-manager-84c694c977-x97xf\" (UID: \"0c975965-c37d-4630-943a-08b57a9e0693\") " pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.329364 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq86b"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.329848 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fq86b" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="registry-server" containerID="cri-o://47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a" gracePeriod=30 Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.339728 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4c5k"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.339987 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s4c5k" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerName="registry-server" 
containerID="cri-o://1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5" gracePeriod=30 Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.361639 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8cgc7"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.362169 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" containerID="cri-o://10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec" gracePeriod=30 Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.372022 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vpwf"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.372411 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8vpwf" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="registry-server" containerID="cri-o://2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc" gracePeriod=30 Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.399143 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86vxz"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.399991 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.403007 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zt55"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.403202 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5zt55" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="registry-server" containerID="cri-o://8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f" gracePeriod=30 Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.406028 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86vxz"] Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.448763 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.457964 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4kls\" (UniqueName: \"kubernetes.io/projected/e0d2175d-f167-4a1f-a14e-df5e69557228-kube-api-access-j4kls\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.458012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0d2175d-f167-4a1f-a14e-df5e69557228-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.458089 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0d2175d-f167-4a1f-a14e-df5e69557228-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.559330 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4kls\" (UniqueName: \"kubernetes.io/projected/e0d2175d-f167-4a1f-a14e-df5e69557228-kube-api-access-j4kls\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.559638 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0d2175d-f167-4a1f-a14e-df5e69557228-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.559731 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0d2175d-f167-4a1f-a14e-df5e69557228-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.560873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0d2175d-f167-4a1f-a14e-df5e69557228-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.569913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0d2175d-f167-4a1f-a14e-df5e69557228-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.575693 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4kls\" (UniqueName: \"kubernetes.io/projected/e0d2175d-f167-4a1f-a14e-df5e69557228-kube-api-access-j4kls\") pod \"marketplace-operator-79b997595-86vxz\" (UID: \"e0d2175d-f167-4a1f-a14e-df5e69557228\") " pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.719136 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.788938 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.864743 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-utilities\") pod \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.864791 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-catalog-content\") pod \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.864821 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncdbc\" (UniqueName: \"kubernetes.io/projected/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-kube-api-access-ncdbc\") pod \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\" (UID: \"62322f6e-727e-4261-bdaa-9b7e91b8c1f7\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.866331 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-utilities" (OuterVolumeSpecName: "utilities") pod "62322f6e-727e-4261-bdaa-9b7e91b8c1f7" (UID: "62322f6e-727e-4261-bdaa-9b7e91b8c1f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.877733 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-kube-api-access-ncdbc" (OuterVolumeSpecName: "kube-api-access-ncdbc") pod "62322f6e-727e-4261-bdaa-9b7e91b8c1f7" (UID: "62322f6e-727e-4261-bdaa-9b7e91b8c1f7"). InnerVolumeSpecName "kube-api-access-ncdbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.885681 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.892072 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vpwf" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.894972 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.897799 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zt55" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.957149 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62322f6e-727e-4261-bdaa-9b7e91b8c1f7" (UID: "62322f6e-727e-4261-bdaa-9b7e91b8c1f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966145 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9nrq\" (UniqueName: \"kubernetes.io/projected/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-kube-api-access-k9nrq\") pod \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966194 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-utilities\") pod \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966213 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lsz9\" (UniqueName: \"kubernetes.io/projected/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-kube-api-access-7lsz9\") pod \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966252 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-operator-metrics\") pod \"aaf191da-bc40-411b-bef2-649b5063978e\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966268 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-utilities\") pod \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966295 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-utilities\") pod \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966328 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-catalog-content\") pod \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\" (UID: \"2d88ce56-9fd6-4b25-a5b5-8353d633ac48\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966350 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sfpz\" (UniqueName: \"kubernetes.io/projected/aaf191da-bc40-411b-bef2-649b5063978e-kube-api-access-2sfpz\") pod \"aaf191da-bc40-411b-bef2-649b5063978e\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966372 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-trusted-ca\") pod \"aaf191da-bc40-411b-bef2-649b5063978e\" (UID: \"aaf191da-bc40-411b-bef2-649b5063978e\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966684 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-catalog-content\") pod \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\" (UID: \"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.966725 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55d2j\" (UniqueName: \"kubernetes.io/projected/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-kube-api-access-55d2j\") pod \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\" (UID: 
\"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.967271 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aaf191da-bc40-411b-bef2-649b5063978e" (UID: "aaf191da-bc40-411b-bef2-649b5063978e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.967436 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-utilities" (OuterVolumeSpecName: "utilities") pod "d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" (UID: "d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969226 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-kube-api-access-7lsz9" (OuterVolumeSpecName: "kube-api-access-7lsz9") pod "2d88ce56-9fd6-4b25-a5b5-8353d633ac48" (UID: "2d88ce56-9fd6-4b25-a5b5-8353d633ac48"). InnerVolumeSpecName "kube-api-access-7lsz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969457 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-catalog-content\") pod \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\" (UID: \"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8\") " Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969828 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-utilities" (OuterVolumeSpecName: "utilities") pod "2d88ce56-9fd6-4b25-a5b5-8353d633ac48" (UID: "2d88ce56-9fd6-4b25-a5b5-8353d633ac48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969855 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969905 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lsz9\" (UniqueName: \"kubernetes.io/projected/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-kube-api-access-7lsz9\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969919 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969932 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969946 4865 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969932 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-kube-api-access-k9nrq" (OuterVolumeSpecName: "kube-api-access-k9nrq") pod "a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" (UID: "a226c2c4-4ed7-4cfa-9fa2-b65151cef65b"). InnerVolumeSpecName "kube-api-access-k9nrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.969957 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncdbc\" (UniqueName: \"kubernetes.io/projected/62322f6e-727e-4261-bdaa-9b7e91b8c1f7-kube-api-access-ncdbc\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.970219 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-kube-api-access-55d2j" (OuterVolumeSpecName: "kube-api-access-55d2j") pod "d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" (UID: "d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8"). InnerVolumeSpecName "kube-api-access-55d2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.970875 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf191da-bc40-411b-bef2-649b5063978e-kube-api-access-2sfpz" (OuterVolumeSpecName: "kube-api-access-2sfpz") pod "aaf191da-bc40-411b-bef2-649b5063978e" (UID: "aaf191da-bc40-411b-bef2-649b5063978e"). InnerVolumeSpecName "kube-api-access-2sfpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.974184 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-utilities" (OuterVolumeSpecName: "utilities") pod "a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" (UID: "a226c2c4-4ed7-4cfa-9fa2-b65151cef65b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.987257 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aaf191da-bc40-411b-bef2-649b5063978e" (UID: "aaf191da-bc40-411b-bef2-649b5063978e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:21:57 crc kubenswrapper[4865]: I0103 04:21:57.999294 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" (UID: "a226c2c4-4ed7-4cfa-9fa2-b65151cef65b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.003930 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84c694c977-x97xf"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.026569 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" (UID: "d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.070599 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.070631 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55d2j\" (UniqueName: \"kubernetes.io/projected/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-kube-api-access-55d2j\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.070641 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.070650 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9nrq\" (UniqueName: \"kubernetes.io/projected/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-kube-api-access-k9nrq\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.070661 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aaf191da-bc40-411b-bef2-649b5063978e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.070670 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.070678 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc 
kubenswrapper[4865]: I0103 04:21:58.070686 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sfpz\" (UniqueName: \"kubernetes.io/projected/aaf191da-bc40-411b-bef2-649b5063978e-kube-api-access-2sfpz\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.088980 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d88ce56-9fd6-4b25-a5b5-8353d633ac48" (UID: "2d88ce56-9fd6-4b25-a5b5-8353d633ac48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.174352 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d88ce56-9fd6-4b25-a5b5-8353d633ac48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.176110 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86vxz"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.314542 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" event={"ID":"e0d2175d-f167-4a1f-a14e-df5e69557228","Type":"ContainerStarted","Data":"b9e631e8272c441d41809a2ce32017fea13c4f1b5405a3b0863170491e5c5321"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.315215 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.315544 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" 
event={"ID":"e0d2175d-f167-4a1f-a14e-df5e69557228","Type":"ContainerStarted","Data":"88827f1e6bab7ac0b2cc342f16dd13fe4fbc868f016df9664dc5c5ab43a46b47"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.316150 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" event={"ID":"0c975965-c37d-4630-943a-08b57a9e0693","Type":"ContainerStarted","Data":"94cb2f1e392db3349b51ceb1391a5f9a0f12d08e35562c37994b8b59e13aec94"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.316265 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" event={"ID":"0c975965-c37d-4630-943a-08b57a9e0693","Type":"ContainerStarted","Data":"4507bd597168ac61cdebae0231eee19fa59182e2f1a45e6ce9dac0fff3f4d4ac"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.317625 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-86vxz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.317682 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" podUID="e0d2175d-f167-4a1f-a14e-df5e69557228" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.317852 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.320452 4865 generic.go:334] "Generic (PLEG): container finished" podID="aaf191da-bc40-411b-bef2-649b5063978e" 
containerID="10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec" exitCode=0 Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.320511 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" event={"ID":"aaf191da-bc40-411b-bef2-649b5063978e","Type":"ContainerDied","Data":"10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.320536 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" event={"ID":"aaf191da-bc40-411b-bef2-649b5063978e","Type":"ContainerDied","Data":"3582e3e8832df54132bc2fccee52a77d77978eae6c090d7ef064eca1e20e8cac"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.320554 4865 scope.go:117] "RemoveContainer" containerID="10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.320673 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8cgc7" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.328537 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerID="8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f" exitCode=0 Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.328600 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5zt55" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.328629 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zt55" event={"ID":"2d88ce56-9fd6-4b25-a5b5-8353d633ac48","Type":"ContainerDied","Data":"8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.328664 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zt55" event={"ID":"2d88ce56-9fd6-4b25-a5b5-8353d633ac48","Type":"ContainerDied","Data":"5cf8fcec4b334eb36828020f1d33f00e5166f8d8b025207cfb31d3920d10e449"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.335694 4865 generic.go:334] "Generic (PLEG): container finished" podID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerID="47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a" exitCode=0 Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.335774 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq86b" event={"ID":"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8","Type":"ContainerDied","Data":"47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.335805 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq86b" event={"ID":"d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8","Type":"ContainerDied","Data":"d278224a369678296388b0c89b2e81655971082342a21a119471dd182ae357d4"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.335891 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fq86b" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.336739 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" podStartSLOduration=1.336723324 podStartE2EDuration="1.336723324s" podCreationTimestamp="2026-01-03 04:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:21:58.333400165 +0000 UTC m=+345.450453350" watchObservedRunningTime="2026-01-03 04:21:58.336723324 +0000 UTC m=+345.453776499" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.337674 4865 scope.go:117] "RemoveContainer" containerID="e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.338114 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.339834 4865 generic.go:334] "Generic (PLEG): container finished" podID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerID="2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc" exitCode=0 Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.339910 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vpwf" event={"ID":"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b","Type":"ContainerDied","Data":"2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.339932 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vpwf" event={"ID":"a226c2c4-4ed7-4cfa-9fa2-b65151cef65b","Type":"ContainerDied","Data":"e5a3f91154408afa1ef08b8e5765fcb2b3d61ac6b2e07396edc6191c10166db9"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 
04:21:58.340108 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vpwf" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.343883 4865 generic.go:334] "Generic (PLEG): container finished" podID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerID="1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5" exitCode=0 Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.343914 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4c5k" event={"ID":"62322f6e-727e-4261-bdaa-9b7e91b8c1f7","Type":"ContainerDied","Data":"1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.343931 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4c5k" event={"ID":"62322f6e-727e-4261-bdaa-9b7e91b8c1f7","Type":"ContainerDied","Data":"5df109944ab20c8ac45a4d97d1e7d134266a4a3ee51cf5c052993bb6e195ea42"} Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.344162 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4c5k" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.364090 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84c694c977-x97xf" podStartSLOduration=3.364074563 podStartE2EDuration="3.364074563s" podCreationTimestamp="2026-01-03 04:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:21:58.362623725 +0000 UTC m=+345.479676910" watchObservedRunningTime="2026-01-03 04:21:58.364074563 +0000 UTC m=+345.481127748" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.382836 4865 scope.go:117] "RemoveContainer" containerID="10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.383950 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec\": container with ID starting with 10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec not found: ID does not exist" containerID="10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.384066 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec"} err="failed to get container status \"10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec\": rpc error: code = NotFound desc = could not find container \"10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec\": container with ID starting with 10995591b759e844b6b39e78bc59b0aa55afbe6ce150827021b3996a082a9bec not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.384158 4865 scope.go:117] 
"RemoveContainer" containerID="e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.384476 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb\": container with ID starting with e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb not found: ID does not exist" containerID="e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.384517 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb"} err="failed to get container status \"e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb\": rpc error: code = NotFound desc = could not find container \"e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb\": container with ID starting with e6da0691e8b9399f34564c6137c383108427f7ddf1189652a3ec3afe0bdfd9eb not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.384541 4865 scope.go:117] "RemoveContainer" containerID="8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.390730 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8cgc7"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.397231 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8cgc7"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.413067 4865 scope.go:117] "RemoveContainer" containerID="789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.459071 4865 scope.go:117] "RemoveContainer" 
containerID="1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.459802 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq86b"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.471445 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fq86b"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.487506 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4c5k"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.500644 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s4c5k"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.504415 4865 scope.go:117] "RemoveContainer" containerID="8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.506477 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f\": container with ID starting with 8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f not found: ID does not exist" containerID="8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.506512 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f"} err="failed to get container status \"8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f\": rpc error: code = NotFound desc = could not find container \"8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f\": container with ID starting with 8a8ebfd1b7efcc179957554e7abd0c5a62b5e8ee28ab7d303a6e59a10985360f not found: ID does 
not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.506535 4865 scope.go:117] "RemoveContainer" containerID="789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.508695 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vpwf"] Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.510387 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39\": container with ID starting with 789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39 not found: ID does not exist" containerID="789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.510412 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39"} err="failed to get container status \"789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39\": rpc error: code = NotFound desc = could not find container \"789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39\": container with ID starting with 789d1491e771f849f49036d12ffeb079119d8ce8d9e691cabd8b5ea8be48ca39 not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.510427 4865 scope.go:117] "RemoveContainer" containerID="1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.510841 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b\": container with ID starting with 1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b not found: ID does not exist" 
containerID="1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.510860 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b"} err="failed to get container status \"1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b\": rpc error: code = NotFound desc = could not find container \"1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b\": container with ID starting with 1e0e3cabdbd6a7dae69c0899febcebb49c88cb80617671f8a4c39ff2d3a2746b not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.510873 4865 scope.go:117] "RemoveContainer" containerID="47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.514506 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vpwf"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.521493 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zt55"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.522577 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5zt55"] Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.525062 4865 scope.go:117] "RemoveContainer" containerID="755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.542779 4865 scope.go:117] "RemoveContainer" containerID="5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.558797 4865 scope.go:117] "RemoveContainer" containerID="47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.559438 4865 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a\": container with ID starting with 47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a not found: ID does not exist" containerID="47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.559518 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a"} err="failed to get container status \"47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a\": rpc error: code = NotFound desc = could not find container \"47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a\": container with ID starting with 47db8150de42ff9be9ce5dcf2eb2255c8d0eae94379cd1612d394a89966a7f0a not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.559571 4865 scope.go:117] "RemoveContainer" containerID="755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.559966 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238\": container with ID starting with 755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238 not found: ID does not exist" containerID="755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.560011 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238"} err="failed to get container status \"755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238\": rpc error: code = NotFound desc = could not find 
container \"755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238\": container with ID starting with 755bf8a7e534aad345a0de5ece4ebba2528a573b13caa1b80c48c74ab01c9238 not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.560047 4865 scope.go:117] "RemoveContainer" containerID="5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.560660 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2\": container with ID starting with 5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2 not found: ID does not exist" containerID="5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.560779 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2"} err="failed to get container status \"5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2\": rpc error: code = NotFound desc = could not find container \"5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2\": container with ID starting with 5446bbf206f66be94b5ca681fd9439411f941645f01f528a94fc41d3991467e2 not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.560801 4865 scope.go:117] "RemoveContainer" containerID="2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.579572 4865 scope.go:117] "RemoveContainer" containerID="b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.595498 4865 scope.go:117] "RemoveContainer" containerID="ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b" Jan 03 04:21:58 
crc kubenswrapper[4865]: I0103 04:21:58.615698 4865 scope.go:117] "RemoveContainer" containerID="2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.616229 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc\": container with ID starting with 2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc not found: ID does not exist" containerID="2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.616271 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc"} err="failed to get container status \"2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc\": rpc error: code = NotFound desc = could not find container \"2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc\": container with ID starting with 2e3871d5a7077274297fb9e8c01b295d7e0768f72393cb1567ad9b9ecac144cc not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.616292 4865 scope.go:117] "RemoveContainer" containerID="b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.616856 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded\": container with ID starting with b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded not found: ID does not exist" containerID="b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.616931 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded"} err="failed to get container status \"b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded\": rpc error: code = NotFound desc = could not find container \"b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded\": container with ID starting with b3d1ab43e4ab945fd89462785e717bc972f0dca1473b7e1b6fb283929f9f2ded not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.616985 4865 scope.go:117] "RemoveContainer" containerID="ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.617585 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b\": container with ID starting with ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b not found: ID does not exist" containerID="ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.617620 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b"} err="failed to get container status \"ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b\": rpc error: code = NotFound desc = could not find container \"ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b\": container with ID starting with ff1a628e77fa7fd8fcbf777495473f276851bdf622298b3293bc1ee97523299b not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.617650 4865 scope.go:117] "RemoveContainer" containerID="1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.630689 4865 scope.go:117] "RemoveContainer" 
containerID="8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.649611 4865 scope.go:117] "RemoveContainer" containerID="11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.665405 4865 scope.go:117] "RemoveContainer" containerID="1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.665776 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5\": container with ID starting with 1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5 not found: ID does not exist" containerID="1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.665802 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5"} err="failed to get container status \"1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5\": rpc error: code = NotFound desc = could not find container \"1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5\": container with ID starting with 1d3fba1c1b728d1adfe73ae10b6f060aaafef035a4de137bd2468e2448d46eb5 not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.665818 4865 scope.go:117] "RemoveContainer" containerID="8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.666426 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef\": container with ID starting with 
8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef not found: ID does not exist" containerID="8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.666464 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef"} err="failed to get container status \"8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef\": rpc error: code = NotFound desc = could not find container \"8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef\": container with ID starting with 8bc6f9cd680b7ed94865785f72a816b609c0fb9dbbcfb884c9d2ff0974ffdcef not found: ID does not exist" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.666490 4865 scope.go:117] "RemoveContainer" containerID="11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0" Jan 03 04:21:58 crc kubenswrapper[4865]: E0103 04:21:58.666824 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0\": container with ID starting with 11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0 not found: ID does not exist" containerID="11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0" Jan 03 04:21:58 crc kubenswrapper[4865]: I0103 04:21:58.666841 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0"} err="failed to get container status \"11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0\": rpc error: code = NotFound desc = could not find container \"11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0\": container with ID starting with 11dfbaca6b58c0bf1305a59f18014336c7c778615dcc4a1e6ac062b60752bbe0 not found: ID does not 
exist" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.163436 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" path="/var/lib/kubelet/pods/2d88ce56-9fd6-4b25-a5b5-8353d633ac48/volumes" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.164055 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" path="/var/lib/kubelet/pods/62322f6e-727e-4261-bdaa-9b7e91b8c1f7/volumes" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.164628 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" path="/var/lib/kubelet/pods/a226c2c4-4ed7-4cfa-9fa2-b65151cef65b/volumes" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.165617 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf191da-bc40-411b-bef2-649b5063978e" path="/var/lib/kubelet/pods/aaf191da-bc40-411b-bef2-649b5063978e/volumes" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.166059 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" path="/var/lib/kubelet/pods/d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8/volumes" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.362319 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-86vxz" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.738632 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mgfcx"] Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.738964 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.738992 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" 
containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739012 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739025 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739038 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739052 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739069 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739080 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739096 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739108 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739123 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739135 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" 
containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739159 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739172 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739191 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739203 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739215 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739227 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="extract-utilities" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739242 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739254 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739269 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739280 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739303 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739315 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739330 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739342 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: E0103 04:21:59.739359 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739371 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="extract-content" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739547 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d88ce56-9fd6-4b25-a5b5-8353d633ac48" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739568 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="62322f6e-727e-4261-bdaa-9b7e91b8c1f7" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739586 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6bbcfc2-cdf7-4d1d-b3c0-4d0598aba1c8" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739606 4865 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a226c2c4-4ed7-4cfa-9fa2-b65151cef65b" containerName="registry-server" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739625 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.739641 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf191da-bc40-411b-bef2-649b5063978e" containerName="marketplace-operator" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.740814 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.745677 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.764537 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgfcx"] Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.801838 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-catalog-content\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.801919 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jhbd\" (UniqueName: \"kubernetes.io/projected/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-kube-api-access-2jhbd\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.801983 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-utilities\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.903209 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-catalog-content\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.903265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jhbd\" (UniqueName: \"kubernetes.io/projected/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-kube-api-access-2jhbd\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.903296 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-utilities\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.903690 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-catalog-content\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.903742 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-utilities\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.926079 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jhbd\" (UniqueName: \"kubernetes.io/projected/e8a3dce4-c06d-4329-9c3c-71813d2c44d3-kube-api-access-2jhbd\") pod \"redhat-marketplace-mgfcx\" (UID: \"e8a3dce4-c06d-4329-9c3c-71813d2c44d3\") " pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.942617 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnlsn"] Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.950760 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.951244 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnlsn"] Jan 03 04:21:59 crc kubenswrapper[4865]: I0103 04:21:59.954710 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.004277 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjcwp\" (UniqueName: \"kubernetes.io/projected/3014cf2d-2752-436d-8878-4883e654999a-kube-api-access-fjcwp\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.004354 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3014cf2d-2752-436d-8878-4883e654999a-utilities\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.004392 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3014cf2d-2752-436d-8878-4883e654999a-catalog-content\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.058817 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.105758 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3014cf2d-2752-436d-8878-4883e654999a-utilities\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.105799 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3014cf2d-2752-436d-8878-4883e654999a-catalog-content\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.105848 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjcwp\" (UniqueName: \"kubernetes.io/projected/3014cf2d-2752-436d-8878-4883e654999a-kube-api-access-fjcwp\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " 
pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.106521 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3014cf2d-2752-436d-8878-4883e654999a-utilities\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.106716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3014cf2d-2752-436d-8878-4883e654999a-catalog-content\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.133997 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjcwp\" (UniqueName: \"kubernetes.io/projected/3014cf2d-2752-436d-8878-4883e654999a-kube-api-access-fjcwp\") pod \"certified-operators-tnlsn\" (UID: \"3014cf2d-2752-436d-8878-4883e654999a\") " pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.253623 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgfcx"] Jan 03 04:22:00 crc kubenswrapper[4865]: W0103 04:22:00.260591 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8a3dce4_c06d_4329_9c3c_71813d2c44d3.slice/crio-29e85e3e36683313ad150371dccbb99ad5267a696e50cf7f2e1923030a3c935a WatchSource:0}: Error finding container 29e85e3e36683313ad150371dccbb99ad5267a696e50cf7f2e1923030a3c935a: Status 404 returned error can't find the container with id 29e85e3e36683313ad150371dccbb99ad5267a696e50cf7f2e1923030a3c935a Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.284755 4865 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.367619 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgfcx" event={"ID":"e8a3dce4-c06d-4329-9c3c-71813d2c44d3","Type":"ContainerStarted","Data":"29e85e3e36683313ad150371dccbb99ad5267a696e50cf7f2e1923030a3c935a"} Jan 03 04:22:00 crc kubenswrapper[4865]: I0103 04:22:00.503171 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnlsn"] Jan 03 04:22:00 crc kubenswrapper[4865]: W0103 04:22:00.532583 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3014cf2d_2752_436d_8878_4883e654999a.slice/crio-f4273c341ae84a7869e2deb40ad7f2809fdb45ac9d906953042bd7ec52ee5e18 WatchSource:0}: Error finding container f4273c341ae84a7869e2deb40ad7f2809fdb45ac9d906953042bd7ec52ee5e18: Status 404 returned error can't find the container with id f4273c341ae84a7869e2deb40ad7f2809fdb45ac9d906953042bd7ec52ee5e18 Jan 03 04:22:01 crc kubenswrapper[4865]: I0103 04:22:01.376625 4865 generic.go:334] "Generic (PLEG): container finished" podID="e8a3dce4-c06d-4329-9c3c-71813d2c44d3" containerID="cbfeb6a7939f525a5a15547ba9656f97068ec431f79f0d9e3fc8dcbe306ec677" exitCode=0 Jan 03 04:22:01 crc kubenswrapper[4865]: I0103 04:22:01.376692 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgfcx" event={"ID":"e8a3dce4-c06d-4329-9c3c-71813d2c44d3","Type":"ContainerDied","Data":"cbfeb6a7939f525a5a15547ba9656f97068ec431f79f0d9e3fc8dcbe306ec677"} Jan 03 04:22:01 crc kubenswrapper[4865]: I0103 04:22:01.379622 4865 generic.go:334] "Generic (PLEG): container finished" podID="3014cf2d-2752-436d-8878-4883e654999a" containerID="4c52105874c03ed756f98e9c6804a72a529076428eae801c49c5bf26bc596c1b" exitCode=0 Jan 03 
04:22:01 crc kubenswrapper[4865]: I0103 04:22:01.379710 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnlsn" event={"ID":"3014cf2d-2752-436d-8878-4883e654999a","Type":"ContainerDied","Data":"4c52105874c03ed756f98e9c6804a72a529076428eae801c49c5bf26bc596c1b"} Jan 03 04:22:01 crc kubenswrapper[4865]: I0103 04:22:01.379758 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnlsn" event={"ID":"3014cf2d-2752-436d-8878-4883e654999a","Type":"ContainerStarted","Data":"f4273c341ae84a7869e2deb40ad7f2809fdb45ac9d906953042bd7ec52ee5e18"} Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.139677 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5gld9"] Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.140833 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.143245 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.161011 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gld9"] Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.241684 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a95df7-b26c-417b-8756-88655a7b34d3-utilities\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.241773 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/44a95df7-b26c-417b-8756-88655a7b34d3-catalog-content\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.241842 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvqwp\" (UniqueName: \"kubernetes.io/projected/44a95df7-b26c-417b-8756-88655a7b34d3-kube-api-access-zvqwp\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.333191 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2f87v"] Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.334516 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.336759 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.342581 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a95df7-b26c-417b-8756-88655a7b34d3-utilities\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.342603 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2f87v"] Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.342643 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/44a95df7-b26c-417b-8756-88655a7b34d3-catalog-content\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.342686 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvqwp\" (UniqueName: \"kubernetes.io/projected/44a95df7-b26c-417b-8756-88655a7b34d3-kube-api-access-zvqwp\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.342993 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a95df7-b26c-417b-8756-88655a7b34d3-utilities\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.343196 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44a95df7-b26c-417b-8756-88655a7b34d3-catalog-content\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.363851 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvqwp\" (UniqueName: \"kubernetes.io/projected/44a95df7-b26c-417b-8756-88655a7b34d3-kube-api-access-zvqwp\") pod \"community-operators-5gld9\" (UID: \"44a95df7-b26c-417b-8756-88655a7b34d3\") " pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.387633 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgfcx" 
event={"ID":"e8a3dce4-c06d-4329-9c3c-71813d2c44d3","Type":"ContainerStarted","Data":"b7007898d90cda9832ffe9785bbb6ab72acc04907480fe370fc82bb6a84624cc"} Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.444553 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mfk\" (UniqueName: \"kubernetes.io/projected/61cc1eff-fb8d-4b15-bc9a-9be54c149447-kube-api-access-p6mfk\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.444604 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc1eff-fb8d-4b15-bc9a-9be54c149447-catalog-content\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.444639 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc1eff-fb8d-4b15-bc9a-9be54c149447-utilities\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.464183 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.545111 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mfk\" (UniqueName: \"kubernetes.io/projected/61cc1eff-fb8d-4b15-bc9a-9be54c149447-kube-api-access-p6mfk\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.545165 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc1eff-fb8d-4b15-bc9a-9be54c149447-catalog-content\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.545210 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc1eff-fb8d-4b15-bc9a-9be54c149447-utilities\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.545824 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61cc1eff-fb8d-4b15-bc9a-9be54c149447-utilities\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.546513 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61cc1eff-fb8d-4b15-bc9a-9be54c149447-catalog-content\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " 
pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.563020 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mfk\" (UniqueName: \"kubernetes.io/projected/61cc1eff-fb8d-4b15-bc9a-9be54c149447-kube-api-access-p6mfk\") pod \"redhat-operators-2f87v\" (UID: \"61cc1eff-fb8d-4b15-bc9a-9be54c149447\") " pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.649775 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:02 crc kubenswrapper[4865]: I0103 04:22:02.896667 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gld9"] Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.114596 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2f87v"] Jan 03 04:22:03 crc kubenswrapper[4865]: W0103 04:22:03.119188 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cc1eff_fb8d_4b15_bc9a_9be54c149447.slice/crio-d2acf31b2b87594ae6ee92d426c18e9a8dd9b32bacd8e7e88629fa7be1d3eb12 WatchSource:0}: Error finding container d2acf31b2b87594ae6ee92d426c18e9a8dd9b32bacd8e7e88629fa7be1d3eb12: Status 404 returned error can't find the container with id d2acf31b2b87594ae6ee92d426c18e9a8dd9b32bacd8e7e88629fa7be1d3eb12 Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.396806 4865 generic.go:334] "Generic (PLEG): container finished" podID="3014cf2d-2752-436d-8878-4883e654999a" containerID="848cc2675f6b235738c929719ecab884f89a0e0bbf3b02fd63936c6de011ac63" exitCode=0 Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.396904 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnlsn" 
event={"ID":"3014cf2d-2752-436d-8878-4883e654999a","Type":"ContainerDied","Data":"848cc2675f6b235738c929719ecab884f89a0e0bbf3b02fd63936c6de011ac63"} Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.398365 4865 generic.go:334] "Generic (PLEG): container finished" podID="44a95df7-b26c-417b-8756-88655a7b34d3" containerID="cc9718c63072a5d3879d8b395a80c3e468cc48e802955253e66180c5518439aa" exitCode=0 Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.398458 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gld9" event={"ID":"44a95df7-b26c-417b-8756-88655a7b34d3","Type":"ContainerDied","Data":"cc9718c63072a5d3879d8b395a80c3e468cc48e802955253e66180c5518439aa"} Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.398506 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gld9" event={"ID":"44a95df7-b26c-417b-8756-88655a7b34d3","Type":"ContainerStarted","Data":"fe8c8c3af687544256c87b7079c3d6215714f0aedaca04c4a6bdd64712d6bc13"} Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.400134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f87v" event={"ID":"61cc1eff-fb8d-4b15-bc9a-9be54c149447","Type":"ContainerDied","Data":"92d658985c4e9a83165a150ca108845cc6c79edb99e21853d9a07ca040f59ad5"} Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.399942 4865 generic.go:334] "Generic (PLEG): container finished" podID="61cc1eff-fb8d-4b15-bc9a-9be54c149447" containerID="92d658985c4e9a83165a150ca108845cc6c79edb99e21853d9a07ca040f59ad5" exitCode=0 Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.401052 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f87v" event={"ID":"61cc1eff-fb8d-4b15-bc9a-9be54c149447","Type":"ContainerStarted","Data":"d2acf31b2b87594ae6ee92d426c18e9a8dd9b32bacd8e7e88629fa7be1d3eb12"} Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 
04:22:03.404507 4865 generic.go:334] "Generic (PLEG): container finished" podID="e8a3dce4-c06d-4329-9c3c-71813d2c44d3" containerID="b7007898d90cda9832ffe9785bbb6ab72acc04907480fe370fc82bb6a84624cc" exitCode=0 Jan 03 04:22:03 crc kubenswrapper[4865]: I0103 04:22:03.404560 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgfcx" event={"ID":"e8a3dce4-c06d-4329-9c3c-71813d2c44d3","Type":"ContainerDied","Data":"b7007898d90cda9832ffe9785bbb6ab72acc04907480fe370fc82bb6a84624cc"} Jan 03 04:22:04 crc kubenswrapper[4865]: I0103 04:22:04.415148 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnlsn" event={"ID":"3014cf2d-2752-436d-8878-4883e654999a","Type":"ContainerStarted","Data":"0d883d24bab6c0990deaae0f8f3c816c7a26993c50b8c5063e5ab00da43a61a0"} Jan 03 04:22:04 crc kubenswrapper[4865]: I0103 04:22:04.417542 4865 generic.go:334] "Generic (PLEG): container finished" podID="44a95df7-b26c-417b-8756-88655a7b34d3" containerID="94b2fc4f0f12270d9cdcb89aa5a940549ea9a7c87a8c92a25802daf7dc401124" exitCode=0 Jan 03 04:22:04 crc kubenswrapper[4865]: I0103 04:22:04.417615 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gld9" event={"ID":"44a95df7-b26c-417b-8756-88655a7b34d3","Type":"ContainerDied","Data":"94b2fc4f0f12270d9cdcb89aa5a940549ea9a7c87a8c92a25802daf7dc401124"} Jan 03 04:22:04 crc kubenswrapper[4865]: I0103 04:22:04.422592 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f87v" event={"ID":"61cc1eff-fb8d-4b15-bc9a-9be54c149447","Type":"ContainerStarted","Data":"c07f2e2568d5d6d5bfd1ba878e9cb7594d9f73b6f394e8a68b6c84e4e59cc45d"} Jan 03 04:22:04 crc kubenswrapper[4865]: I0103 04:22:04.424935 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgfcx" 
event={"ID":"e8a3dce4-c06d-4329-9c3c-71813d2c44d3","Type":"ContainerStarted","Data":"d7ea5cd8cf830fcd18808733fc27cf87bd89a30801a1fa5a7284d7c28b93319e"} Jan 03 04:22:04 crc kubenswrapper[4865]: I0103 04:22:04.436942 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnlsn" podStartSLOduration=3.039746076 podStartE2EDuration="5.436924763s" podCreationTimestamp="2026-01-03 04:21:59 +0000 UTC" firstStartedPulling="2026-01-03 04:22:01.385448638 +0000 UTC m=+348.502501823" lastFinishedPulling="2026-01-03 04:22:03.782627315 +0000 UTC m=+350.899680510" observedRunningTime="2026-01-03 04:22:04.432687423 +0000 UTC m=+351.549740618" watchObservedRunningTime="2026-01-03 04:22:04.436924763 +0000 UTC m=+351.553977948" Jan 03 04:22:04 crc kubenswrapper[4865]: I0103 04:22:04.448008 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mgfcx" podStartSLOduration=2.8503849900000002 podStartE2EDuration="5.447990066s" podCreationTimestamp="2026-01-03 04:21:59 +0000 UTC" firstStartedPulling="2026-01-03 04:22:01.378544581 +0000 UTC m=+348.495597806" lastFinishedPulling="2026-01-03 04:22:03.976149697 +0000 UTC m=+351.093202882" observedRunningTime="2026-01-03 04:22:04.447321104 +0000 UTC m=+351.564374289" watchObservedRunningTime="2026-01-03 04:22:04.447990066 +0000 UTC m=+351.565043251" Jan 03 04:22:05 crc kubenswrapper[4865]: I0103 04:22:05.431176 4865 generic.go:334] "Generic (PLEG): container finished" podID="61cc1eff-fb8d-4b15-bc9a-9be54c149447" containerID="c07f2e2568d5d6d5bfd1ba878e9cb7594d9f73b6f394e8a68b6c84e4e59cc45d" exitCode=0 Jan 03 04:22:05 crc kubenswrapper[4865]: I0103 04:22:05.431616 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f87v" event={"ID":"61cc1eff-fb8d-4b15-bc9a-9be54c149447","Type":"ContainerDied","Data":"c07f2e2568d5d6d5bfd1ba878e9cb7594d9f73b6f394e8a68b6c84e4e59cc45d"} Jan 
03 04:22:05 crc kubenswrapper[4865]: I0103 04:22:05.433775 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gld9" event={"ID":"44a95df7-b26c-417b-8756-88655a7b34d3","Type":"ContainerStarted","Data":"11116c1c7bdf8439957060118b11e63c762942a37cfd4548996cde2e3cdc66af"} Jan 03 04:22:05 crc kubenswrapper[4865]: I0103 04:22:05.470258 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5gld9" podStartSLOduration=1.981806312 podStartE2EDuration="3.470240428s" podCreationTimestamp="2026-01-03 04:22:02 +0000 UTC" firstStartedPulling="2026-01-03 04:22:03.399547453 +0000 UTC m=+350.516600638" lastFinishedPulling="2026-01-03 04:22:04.887981569 +0000 UTC m=+352.005034754" observedRunningTime="2026-01-03 04:22:05.468665457 +0000 UTC m=+352.585718662" watchObservedRunningTime="2026-01-03 04:22:05.470240428 +0000 UTC m=+352.587293613" Jan 03 04:22:06 crc kubenswrapper[4865]: I0103 04:22:06.440576 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2f87v" event={"ID":"61cc1eff-fb8d-4b15-bc9a-9be54c149447","Type":"ContainerStarted","Data":"36b9b4f95d64d669f4a0a6f1908768d678563e78ee2803e6e459546d9f59641b"} Jan 03 04:22:07 crc kubenswrapper[4865]: I0103 04:22:07.277470 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2hcpf" Jan 03 04:22:07 crc kubenswrapper[4865]: I0103 04:22:07.296216 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2f87v" podStartSLOduration=2.702179771 podStartE2EDuration="5.296197509s" podCreationTimestamp="2026-01-03 04:22:02 +0000 UTC" firstStartedPulling="2026-01-03 04:22:03.401478506 +0000 UTC m=+350.518531691" lastFinishedPulling="2026-01-03 04:22:05.995496244 +0000 UTC m=+353.112549429" observedRunningTime="2026-01-03 04:22:06.467120077 +0000 UTC 
m=+353.584173262" watchObservedRunningTime="2026-01-03 04:22:07.296197509 +0000 UTC m=+354.413250694" Jan 03 04:22:07 crc kubenswrapper[4865]: I0103 04:22:07.362862 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmk4r"] Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.059889 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.060496 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.121607 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.284940 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.284979 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.331462 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.516947 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnlsn" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.519488 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mgfcx" Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.739680 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:22:10 crc kubenswrapper[4865]: I0103 04:22:10.739748 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:22:12 crc kubenswrapper[4865]: I0103 04:22:12.465608 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:12 crc kubenswrapper[4865]: I0103 04:22:12.465987 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:12 crc kubenswrapper[4865]: I0103 04:22:12.516919 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:12 crc kubenswrapper[4865]: I0103 04:22:12.650916 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:12 crc kubenswrapper[4865]: I0103 04:22:12.650973 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:12 crc kubenswrapper[4865]: I0103 04:22:12.696435 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:13 crc kubenswrapper[4865]: I0103 04:22:13.524299 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5gld9" Jan 03 04:22:13 crc kubenswrapper[4865]: I0103 04:22:13.526089 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2f87v" Jan 03 04:22:15 crc kubenswrapper[4865]: I0103 04:22:15.425769 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss"] Jan 03 04:22:15 crc kubenswrapper[4865]: I0103 04:22:15.425965 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" podUID="3d98d83f-e44f-4dae-be12-6bf375977bc8" containerName="route-controller-manager" containerID="cri-o://fbc48c3abd180bc5e85d1fb8a58816b674d09cdd4121df77c150e50f53c99379" gracePeriod=30 Jan 03 04:22:16 crc kubenswrapper[4865]: I0103 04:22:16.491368 4865 generic.go:334] "Generic (PLEG): container finished" podID="3d98d83f-e44f-4dae-be12-6bf375977bc8" containerID="fbc48c3abd180bc5e85d1fb8a58816b674d09cdd4121df77c150e50f53c99379" exitCode=0 Jan 03 04:22:16 crc kubenswrapper[4865]: I0103 04:22:16.491431 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" event={"ID":"3d98d83f-e44f-4dae-be12-6bf375977bc8","Type":"ContainerDied","Data":"fbc48c3abd180bc5e85d1fb8a58816b674d09cdd4121df77c150e50f53c99379"} Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.245646 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.275405 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j"] Jan 03 04:22:17 crc kubenswrapper[4865]: E0103 04:22:17.275669 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d98d83f-e44f-4dae-be12-6bf375977bc8" containerName="route-controller-manager" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.275684 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d98d83f-e44f-4dae-be12-6bf375977bc8" containerName="route-controller-manager" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.275821 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d98d83f-e44f-4dae-be12-6bf375977bc8" containerName="route-controller-manager" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.276319 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.295707 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j"] Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.358610 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-config\") pod \"3d98d83f-e44f-4dae-be12-6bf375977bc8\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.358660 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b82r6\" (UniqueName: \"kubernetes.io/projected/3d98d83f-e44f-4dae-be12-6bf375977bc8-kube-api-access-b82r6\") pod \"3d98d83f-e44f-4dae-be12-6bf375977bc8\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.358701 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-client-ca\") pod \"3d98d83f-e44f-4dae-be12-6bf375977bc8\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.358780 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98d83f-e44f-4dae-be12-6bf375977bc8-serving-cert\") pod \"3d98d83f-e44f-4dae-be12-6bf375977bc8\" (UID: \"3d98d83f-e44f-4dae-be12-6bf375977bc8\") " Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.360652 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-config" (OuterVolumeSpecName: "config") pod "3d98d83f-e44f-4dae-be12-6bf375977bc8" (UID: 
"3d98d83f-e44f-4dae-be12-6bf375977bc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.360743 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d98d83f-e44f-4dae-be12-6bf375977bc8" (UID: "3d98d83f-e44f-4dae-be12-6bf375977bc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.367136 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d98d83f-e44f-4dae-be12-6bf375977bc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d98d83f-e44f-4dae-be12-6bf375977bc8" (UID: "3d98d83f-e44f-4dae-be12-6bf375977bc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.378620 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d98d83f-e44f-4dae-be12-6bf375977bc8-kube-api-access-b82r6" (OuterVolumeSpecName: "kube-api-access-b82r6") pod "3d98d83f-e44f-4dae-be12-6bf375977bc8" (UID: "3d98d83f-e44f-4dae-be12-6bf375977bc8"). InnerVolumeSpecName "kube-api-access-b82r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460402 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-client-ca\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460462 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-config\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460537 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m585\" (UniqueName: \"kubernetes.io/projected/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-kube-api-access-6m585\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460724 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-serving-cert\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460810 4865 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460826 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b82r6\" (UniqueName: \"kubernetes.io/projected/3d98d83f-e44f-4dae-be12-6bf375977bc8-kube-api-access-b82r6\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460837 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d98d83f-e44f-4dae-be12-6bf375977bc8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.460850 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98d83f-e44f-4dae-be12-6bf375977bc8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.497704 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" event={"ID":"3d98d83f-e44f-4dae-be12-6bf375977bc8","Type":"ContainerDied","Data":"bfd1e4142174e5644781a92b66e85946b07452902be5a56d7d9c5bce1f40574d"} Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.497727 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.497752 4865 scope.go:117] "RemoveContainer" containerID="fbc48c3abd180bc5e85d1fb8a58816b674d09cdd4121df77c150e50f53c99379" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.522824 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss"] Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.528351 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67cc8d88b-nx4ss"] Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.561592 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m585\" (UniqueName: \"kubernetes.io/projected/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-kube-api-access-6m585\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.561713 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-serving-cert\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.561753 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-client-ca\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " 
pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.561780 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-config\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.563175 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-config\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.563570 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-client-ca\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.569156 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-serving-cert\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.584564 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m585\" (UniqueName: 
\"kubernetes.io/projected/d9b4ced7-9330-43d2-ba77-c5a3694e3af3-kube-api-access-6m585\") pod \"route-controller-manager-68c7f995b4-slk2j\" (UID: \"d9b4ced7-9330-43d2-ba77-c5a3694e3af3\") " pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:17 crc kubenswrapper[4865]: I0103 04:22:17.599222 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:18 crc kubenswrapper[4865]: I0103 04:22:18.067037 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j"] Jan 03 04:22:18 crc kubenswrapper[4865]: I0103 04:22:18.505777 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" event={"ID":"d9b4ced7-9330-43d2-ba77-c5a3694e3af3","Type":"ContainerStarted","Data":"870f652654aace4fac37e9c243ee5317df9c99eaf17565606f5e24f833f77de2"} Jan 03 04:22:19 crc kubenswrapper[4865]: I0103 04:22:19.161837 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d98d83f-e44f-4dae-be12-6bf375977bc8" path="/var/lib/kubelet/pods/3d98d83f-e44f-4dae-be12-6bf375977bc8/volumes" Jan 03 04:22:19 crc kubenswrapper[4865]: I0103 04:22:19.511990 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" event={"ID":"d9b4ced7-9330-43d2-ba77-c5a3694e3af3","Type":"ContainerStarted","Data":"3798518e452d10ad3dc27d620f314fd431cecf91c857f3dab1eb3ff51909e103"} Jan 03 04:22:20 crc kubenswrapper[4865]: I0103 04:22:20.518203 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:20 crc kubenswrapper[4865]: I0103 04:22:20.526697 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" Jan 03 04:22:20 crc kubenswrapper[4865]: I0103 04:22:20.553992 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68c7f995b4-slk2j" podStartSLOduration=5.553947429 podStartE2EDuration="5.553947429s" podCreationTimestamp="2026-01-03 04:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:22:20.540185806 +0000 UTC m=+367.657239031" watchObservedRunningTime="2026-01-03 04:22:20.553947429 +0000 UTC m=+367.671000674" Jan 03 04:22:32 crc kubenswrapper[4865]: I0103 04:22:32.417265 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" podUID="b0447a4f-7f4f-41c7-912e-34b7e2b5e077" containerName="registry" containerID="cri-o://6e7318c1c1c84204b324152d7e0478199298925ae93e89e76f27ae63d7d6ee98" gracePeriod=30 Jan 03 04:22:33 crc kubenswrapper[4865]: I0103 04:22:33.606674 4865 generic.go:334] "Generic (PLEG): container finished" podID="b0447a4f-7f4f-41c7-912e-34b7e2b5e077" containerID="6e7318c1c1c84204b324152d7e0478199298925ae93e89e76f27ae63d7d6ee98" exitCode=0 Jan 03 04:22:33 crc kubenswrapper[4865]: I0103 04:22:33.606873 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" event={"ID":"b0447a4f-7f4f-41c7-912e-34b7e2b5e077","Type":"ContainerDied","Data":"6e7318c1c1c84204b324152d7e0478199298925ae93e89e76f27ae63d7d6ee98"} Jan 03 04:22:33 crc kubenswrapper[4865]: I0103 04:22:33.940410 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.089539 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-tls\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.089719 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-certificates\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.089780 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-installation-pull-secrets\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.089858 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-bound-sa-token\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.089893 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-ca-trust-extracted\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.090599 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.090654 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-559q4\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-kube-api-access-559q4\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.090696 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-trusted-ca\") pod \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\" (UID: \"b0447a4f-7f4f-41c7-912e-34b7e2b5e077\") " Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.090932 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.091346 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.091500 4865 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.091536 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.095755 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.096008 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.096605 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.097753 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-kube-api-access-559q4" (OuterVolumeSpecName: "kube-api-access-559q4") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "kube-api-access-559q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.105049 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.106085 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b0447a4f-7f4f-41c7-912e-34b7e2b5e077" (UID: "b0447a4f-7f4f-41c7-912e-34b7e2b5e077"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.193338 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.193436 4865 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.193460 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-559q4\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-kube-api-access-559q4\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.193484 4865 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.193505 4865 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0447a4f-7f4f-41c7-912e-34b7e2b5e077-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.614103 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" event={"ID":"b0447a4f-7f4f-41c7-912e-34b7e2b5e077","Type":"ContainerDied","Data":"5817d51e6089387991609c632e48f8ecd18f1a5a22fd9c6d6b8c4c83722e2a16"} Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.614164 4865 scope.go:117] "RemoveContainer" containerID="6e7318c1c1c84204b324152d7e0478199298925ae93e89e76f27ae63d7d6ee98" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 
04:22:34.614184 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hmk4r" Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.650949 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmk4r"] Jan 03 04:22:34 crc kubenswrapper[4865]: I0103 04:22:34.659786 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hmk4r"] Jan 03 04:22:35 crc kubenswrapper[4865]: I0103 04:22:35.165239 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0447a4f-7f4f-41c7-912e-34b7e2b5e077" path="/var/lib/kubelet/pods/b0447a4f-7f4f-41c7-912e-34b7e2b5e077/volumes" Jan 03 04:22:40 crc kubenswrapper[4865]: I0103 04:22:40.739705 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:22:40 crc kubenswrapper[4865]: I0103 04:22:40.740439 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:23:10 crc kubenswrapper[4865]: I0103 04:23:10.740043 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:23:10 crc kubenswrapper[4865]: I0103 04:23:10.740752 4865 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:23:10 crc kubenswrapper[4865]: I0103 04:23:10.740818 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:23:10 crc kubenswrapper[4865]: I0103 04:23:10.741646 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e717868cbcec21bcd2ab2f7dcda005af8e3ebb229bcbf85e0a159137ebd2f9e"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:23:10 crc kubenswrapper[4865]: I0103 04:23:10.741747 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://4e717868cbcec21bcd2ab2f7dcda005af8e3ebb229bcbf85e0a159137ebd2f9e" gracePeriod=600 Jan 03 04:23:11 crc kubenswrapper[4865]: I0103 04:23:11.840697 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="4e717868cbcec21bcd2ab2f7dcda005af8e3ebb229bcbf85e0a159137ebd2f9e" exitCode=0 Jan 03 04:23:11 crc kubenswrapper[4865]: I0103 04:23:11.840777 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"4e717868cbcec21bcd2ab2f7dcda005af8e3ebb229bcbf85e0a159137ebd2f9e"} Jan 03 04:23:11 crc kubenswrapper[4865]: I0103 04:23:11.841681 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"60deac40539593b8b14a3c569523707488c15ac6fa42425acabd24f1e426fa4c"} Jan 03 04:23:11 crc kubenswrapper[4865]: I0103 04:23:11.841726 4865 scope.go:117] "RemoveContainer" containerID="2f5d90eedbe0643323d9804d3d76dd9f588a777a9dd30e99e1f914cbc9c9b9ab" Jan 03 04:25:40 crc kubenswrapper[4865]: I0103 04:25:40.739487 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:25:40 crc kubenswrapper[4865]: I0103 04:25:40.740161 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:26:10 crc kubenswrapper[4865]: I0103 04:26:10.739934 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:26:10 crc kubenswrapper[4865]: I0103 04:26:10.740449 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:26:40 crc kubenswrapper[4865]: I0103 04:26:40.740231 4865 
patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:26:40 crc kubenswrapper[4865]: I0103 04:26:40.740941 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:26:40 crc kubenswrapper[4865]: I0103 04:26:40.741004 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:26:40 crc kubenswrapper[4865]: I0103 04:26:40.741812 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60deac40539593b8b14a3c569523707488c15ac6fa42425acabd24f1e426fa4c"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:26:40 crc kubenswrapper[4865]: I0103 04:26:40.741909 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://60deac40539593b8b14a3c569523707488c15ac6fa42425acabd24f1e426fa4c" gracePeriod=600 Jan 03 04:26:41 crc kubenswrapper[4865]: I0103 04:26:41.247097 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="60deac40539593b8b14a3c569523707488c15ac6fa42425acabd24f1e426fa4c" exitCode=0 Jan 03 04:26:41 crc 
kubenswrapper[4865]: I0103 04:26:41.247164 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"60deac40539593b8b14a3c569523707488c15ac6fa42425acabd24f1e426fa4c"} Jan 03 04:26:41 crc kubenswrapper[4865]: I0103 04:26:41.247466 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"c20a0e7659d3d063fdf492b3db209de2d28bdf1740f3632846fd9860f5536eb8"} Jan 03 04:26:41 crc kubenswrapper[4865]: I0103 04:26:41.247496 4865 scope.go:117] "RemoveContainer" containerID="4e717868cbcec21bcd2ab2f7dcda005af8e3ebb229bcbf85e0a159137ebd2f9e" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.603888 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8"] Jan 03 04:27:03 crc kubenswrapper[4865]: E0103 04:27:03.604697 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0447a4f-7f4f-41c7-912e-34b7e2b5e077" containerName="registry" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.604713 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0447a4f-7f4f-41c7-912e-34b7e2b5e077" containerName="registry" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.604833 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0447a4f-7f4f-41c7-912e-34b7e2b5e077" containerName="registry" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.605252 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.608076 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.608422 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dzn87" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.608431 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.618148 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-wklxs"] Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.618972 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wklxs" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.631646 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zkfgb" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.633420 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8"] Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.639184 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jtp7f"] Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.640484 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.643522 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tbdmr" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.643699 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wklxs"] Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.659047 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jtp7f"] Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.773748 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmbv\" (UniqueName: \"kubernetes.io/projected/e0da0830-24dd-48df-8f23-a1338aff9d50-kube-api-access-7bmbv\") pod \"cert-manager-858654f9db-wklxs\" (UID: \"e0da0830-24dd-48df-8f23-a1338aff9d50\") " pod="cert-manager/cert-manager-858654f9db-wklxs" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.773873 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pws89\" (UniqueName: \"kubernetes.io/projected/8b1175af-0486-4c47-8135-1b968223783e-kube-api-access-pws89\") pod \"cert-manager-cainjector-cf98fcc89-jzwg8\" (UID: \"8b1175af-0486-4c47-8135-1b968223783e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.773906 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwjh6\" (UniqueName: \"kubernetes.io/projected/f44bcedc-d643-4020-ac35-8777348583ef-kube-api-access-nwjh6\") pod \"cert-manager-webhook-687f57d79b-jtp7f\" (UID: \"f44bcedc-d643-4020-ac35-8777348583ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.874827 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwjh6\" (UniqueName: \"kubernetes.io/projected/f44bcedc-d643-4020-ac35-8777348583ef-kube-api-access-nwjh6\") pod \"cert-manager-webhook-687f57d79b-jtp7f\" (UID: \"f44bcedc-d643-4020-ac35-8777348583ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.874873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pws89\" (UniqueName: \"kubernetes.io/projected/8b1175af-0486-4c47-8135-1b968223783e-kube-api-access-pws89\") pod \"cert-manager-cainjector-cf98fcc89-jzwg8\" (UID: \"8b1175af-0486-4c47-8135-1b968223783e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.874923 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmbv\" (UniqueName: \"kubernetes.io/projected/e0da0830-24dd-48df-8f23-a1338aff9d50-kube-api-access-7bmbv\") pod \"cert-manager-858654f9db-wklxs\" (UID: \"e0da0830-24dd-48df-8f23-a1338aff9d50\") " pod="cert-manager/cert-manager-858654f9db-wklxs" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.898902 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwjh6\" (UniqueName: \"kubernetes.io/projected/f44bcedc-d643-4020-ac35-8777348583ef-kube-api-access-nwjh6\") pod \"cert-manager-webhook-687f57d79b-jtp7f\" (UID: \"f44bcedc-d643-4020-ac35-8777348583ef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.904671 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmbv\" (UniqueName: \"kubernetes.io/projected/e0da0830-24dd-48df-8f23-a1338aff9d50-kube-api-access-7bmbv\") pod \"cert-manager-858654f9db-wklxs\" (UID: \"e0da0830-24dd-48df-8f23-a1338aff9d50\") " 
pod="cert-manager/cert-manager-858654f9db-wklxs" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.914563 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pws89\" (UniqueName: \"kubernetes.io/projected/8b1175af-0486-4c47-8135-1b968223783e-kube-api-access-pws89\") pod \"cert-manager-cainjector-cf98fcc89-jzwg8\" (UID: \"8b1175af-0486-4c47-8135-1b968223783e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.932264 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.937889 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wklxs" Jan 03 04:27:03 crc kubenswrapper[4865]: I0103 04:27:03.961123 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" Jan 03 04:27:04 crc kubenswrapper[4865]: I0103 04:27:04.167194 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8"] Jan 03 04:27:04 crc kubenswrapper[4865]: I0103 04:27:04.187199 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 04:27:04 crc kubenswrapper[4865]: I0103 04:27:04.227731 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wklxs"] Jan 03 04:27:04 crc kubenswrapper[4865]: W0103 04:27:04.233226 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0da0830_24dd_48df_8f23_a1338aff9d50.slice/crio-e931c3601c91ee05bce04d4ab246abc6f70c326dae5eab9ebc7917014c3e3007 WatchSource:0}: Error finding container e931c3601c91ee05bce04d4ab246abc6f70c326dae5eab9ebc7917014c3e3007: Status 404 returned 
error can't find the container with id e931c3601c91ee05bce04d4ab246abc6f70c326dae5eab9ebc7917014c3e3007 Jan 03 04:27:04 crc kubenswrapper[4865]: I0103 04:27:04.406117 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" event={"ID":"8b1175af-0486-4c47-8135-1b968223783e","Type":"ContainerStarted","Data":"6d1e0cad312db38bcac9993e37c56872877db313bb391bf41b69f50b51893384"} Jan 03 04:27:04 crc kubenswrapper[4865]: I0103 04:27:04.407832 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wklxs" event={"ID":"e0da0830-24dd-48df-8f23-a1338aff9d50","Type":"ContainerStarted","Data":"e931c3601c91ee05bce04d4ab246abc6f70c326dae5eab9ebc7917014c3e3007"} Jan 03 04:27:04 crc kubenswrapper[4865]: I0103 04:27:04.447000 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-jtp7f"] Jan 03 04:27:05 crc kubenswrapper[4865]: I0103 04:27:05.422485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" event={"ID":"f44bcedc-d643-4020-ac35-8777348583ef","Type":"ContainerStarted","Data":"3007baecb916a3fc4f3938bd8d39af94925947e4ccbbdb597c0f1a5f0e267b2b"} Jan 03 04:27:08 crc kubenswrapper[4865]: I0103 04:27:08.447036 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" event={"ID":"8b1175af-0486-4c47-8135-1b968223783e","Type":"ContainerStarted","Data":"db197679ea3117346a9d88ee08ecc48aa25fde3633c1bac64faea21dae237664"} Jan 03 04:27:08 crc kubenswrapper[4865]: I0103 04:27:08.448750 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wklxs" event={"ID":"e0da0830-24dd-48df-8f23-a1338aff9d50","Type":"ContainerStarted","Data":"6fc54a248b1484509914b2b24d7581711ea805b52a1035f2e7dbc24e452fd50a"} Jan 03 04:27:08 crc kubenswrapper[4865]: I0103 04:27:08.479423 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jzwg8" podStartSLOduration=2.223056239 podStartE2EDuration="5.479362726s" podCreationTimestamp="2026-01-03 04:27:03 +0000 UTC" firstStartedPulling="2026-01-03 04:27:04.186905426 +0000 UTC m=+651.303958621" lastFinishedPulling="2026-01-03 04:27:07.443211893 +0000 UTC m=+654.560265108" observedRunningTime="2026-01-03 04:27:08.470622342 +0000 UTC m=+655.587675677" watchObservedRunningTime="2026-01-03 04:27:08.479362726 +0000 UTC m=+655.596415941" Jan 03 04:27:08 crc kubenswrapper[4865]: I0103 04:27:08.503193 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-wklxs" podStartSLOduration=2.2186186 podStartE2EDuration="5.503167423s" podCreationTimestamp="2026-01-03 04:27:03 +0000 UTC" firstStartedPulling="2026-01-03 04:27:04.235000935 +0000 UTC m=+651.352054130" lastFinishedPulling="2026-01-03 04:27:07.519549758 +0000 UTC m=+654.636602953" observedRunningTime="2026-01-03 04:27:08.494063679 +0000 UTC m=+655.611116904" watchObservedRunningTime="2026-01-03 04:27:08.503167423 +0000 UTC m=+655.620220648" Jan 03 04:27:09 crc kubenswrapper[4865]: I0103 04:27:09.459055 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" event={"ID":"f44bcedc-d643-4020-ac35-8777348583ef","Type":"ContainerStarted","Data":"425b0e4255ab678d078c91c1491f92bf5b0aeae39555e664bdb6251eec87d6fc"} Jan 03 04:27:10 crc kubenswrapper[4865]: I0103 04:27:10.464619 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.364712 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" podStartSLOduration=6.308301857 podStartE2EDuration="10.364681324s" 
podCreationTimestamp="2026-01-03 04:27:03 +0000 UTC" firstStartedPulling="2026-01-03 04:27:04.449575351 +0000 UTC m=+651.566628566" lastFinishedPulling="2026-01-03 04:27:08.505954848 +0000 UTC m=+655.623008033" observedRunningTime="2026-01-03 04:27:09.500098635 +0000 UTC m=+656.617151860" watchObservedRunningTime="2026-01-03 04:27:13.364681324 +0000 UTC m=+660.481734559" Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.370371 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jvxfl"] Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.371323 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-controller" containerID="cri-o://5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" gracePeriod=30 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.371914 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="sbdb" containerID="cri-o://a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" gracePeriod=30 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.372074 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="nbdb" containerID="cri-o://e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" gracePeriod=30 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.372189 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-acl-logging" containerID="cri-o://ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" gracePeriod=30 Jan 03 
04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.372200 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-node" containerID="cri-o://e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" gracePeriod=30 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.372351 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="northd" containerID="cri-o://57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" gracePeriod=30 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.372425 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" gracePeriod=30 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.412328 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" containerID="cri-o://e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" gracePeriod=30 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.488148 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/2.log" Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.488672 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/1.log" Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.488734 4865 generic.go:334] "Generic 
(PLEG): container finished" podID="2fadcfb6-a571-4d6b-af2d-da885a478206" containerID="5f51ac2adbceb834fc3a6428c9be6afad0e378157dfa78c123bf38f0332c7c30" exitCode=2 Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.488765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerDied","Data":"5f51ac2adbceb834fc3a6428c9be6afad0e378157dfa78c123bf38f0332c7c30"} Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.488803 4865 scope.go:117] "RemoveContainer" containerID="141a434cea4954bcfda62373aaec40b4f40845ae6138f0a0928894ab6422f8f4" Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.489307 4865 scope.go:117] "RemoveContainer" containerID="5f51ac2adbceb834fc3a6428c9be6afad0e378157dfa78c123bf38f0332c7c30" Jan 03 04:27:13 crc kubenswrapper[4865]: E0103 04:27:13.489559 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nrhl2_openshift-multus(2fadcfb6-a571-4d6b-af2d-da885a478206)\"" pod="openshift-multus/multus-nrhl2" podUID="2fadcfb6-a571-4d6b-af2d-da885a478206" Jan 03 04:27:13 crc kubenswrapper[4865]: I0103 04:27:13.964848 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-jtp7f" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.075831 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/3.log" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.078230 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovn-acl-logging/0.log" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.078893 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovn-controller/0.log" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.079362 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.149793 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lclqx"] Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.150239 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.150322 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.150396 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="sbdb" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.150454 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="sbdb" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.150525 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="northd" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.150588 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="northd" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.150647 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.150703 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.150775 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.150840 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.150900 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.150952 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.151016 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-acl-logging" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151068 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-acl-logging" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.151119 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="nbdb" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151170 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="nbdb" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.151257 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-node" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151312 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-node" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.151361 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151446 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.151499 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kubecfg-setup" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151545 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kubecfg-setup" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.151592 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-ovn-metrics" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151652 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-ovn-metrics" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.151711 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151765 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151917 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.151981 4865 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-node" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.152047 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="kube-rbac-proxy-ovn-metrics" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.152440 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.152505 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="northd" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.152579 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.152644 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.152864 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovn-acl-logging" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.152931 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="sbdb" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.153086 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="nbdb" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.153363 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.153518 
4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerName="ovnkube-controller" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.155784 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.248650 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-systemd-units\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249025 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-netns\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.248781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249068 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-var-lib-openvswitch\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249133 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249144 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249163 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovn-node-metrics-cert\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249215 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-log-socket\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249246 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-ovn\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249278 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhlg\" (UniqueName: \"kubernetes.io/projected/226b5379-0cbe-42e6-b5af-917a5e4b734d-kube-api-access-9bhlg\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249312 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-systemd\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249314 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-log-socket" (OuterVolumeSpecName: "log-socket") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249323 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249341 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-slash\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249371 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-slash" (OuterVolumeSpecName: "host-slash") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249416 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-config\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249458 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-node-log\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249485 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-ovn-kubernetes\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249514 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-etc-openvswitch\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249522 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-node-log" (OuterVolumeSpecName: "node-log") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249543 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-openvswitch\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249568 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-bin\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249571 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249572 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249581 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249616 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-script-lib\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249656 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249657 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249690 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249699 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-env-overrides\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249750 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-kubelet\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249781 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-netd\") pod \"226b5379-0cbe-42e6-b5af-917a5e4b734d\" (UID: \"226b5379-0cbe-42e6-b5af-917a5e4b734d\") " Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249835 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249852 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.249957 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250014 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250076 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250227 4865 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250253 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250272 4865 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250290 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250677 4865 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250693 4865 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250707 4865 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc 
kubenswrapper[4865]: I0103 04:27:14.250722 4865 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250738 4865 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250753 4865 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-log-socket\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250767 4865 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250781 4865 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-slash\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250796 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250810 4865 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-node-log\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250824 4865 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250839 4865 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.250853 4865 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.254236 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226b5379-0cbe-42e6-b5af-917a5e4b734d-kube-api-access-9bhlg" (OuterVolumeSpecName: "kube-api-access-9bhlg") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "kube-api-access-9bhlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.256296 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.276577 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "226b5379-0cbe-42e6-b5af-917a5e4b734d" (UID: "226b5379-0cbe-42e6-b5af-917a5e4b734d"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.351694 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-var-lib-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.351832 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-log-socket\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.351879 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-node-log\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.351936 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-systemd-units\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.351972 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: 
\"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352026 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-cni-netd\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352084 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-ovnkube-config\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352122 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2qph\" (UniqueName: \"kubernetes.io/projected/76224070-cd81-4079-92e6-f9ff5d9311bc-kube-api-access-z2qph\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352161 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352194 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-env-overrides\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352225 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-etc-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352260 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-run-netns\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352293 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352323 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-slash\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352364 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-ovnkube-script-lib\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352460 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76224070-cd81-4079-92e6-f9ff5d9311bc-ovn-node-metrics-cert\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352501 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-systemd\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352549 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-ovn\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352602 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-kubelet\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352634 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-cni-bin\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352704 4865 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/226b5379-0cbe-42e6-b5af-917a5e4b734d-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352728 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/226b5379-0cbe-42e6-b5af-917a5e4b734d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.352750 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bhlg\" (UniqueName: \"kubernetes.io/projected/226b5379-0cbe-42e6-b5af-917a5e4b734d-kube-api-access-9bhlg\") on node \"crc\" DevicePath \"\"" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453645 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-node-log\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453710 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-systemd-units\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453734 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453758 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-cni-netd\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453781 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-ovnkube-config\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453805 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453829 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2qph\" (UniqueName: \"kubernetes.io/projected/76224070-cd81-4079-92e6-f9ff5d9311bc-kube-api-access-z2qph\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453848 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-env-overrides\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453860 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-cni-netd\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453915 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-etc-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453871 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-etc-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-run-netns\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454022 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-slash\") pod 
\"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454120 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76224070-cd81-4079-92e6-f9ff5d9311bc-ovn-node-metrics-cert\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-ovnkube-script-lib\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454205 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-systemd\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454276 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-ovn\") pod \"ovnkube-node-lclqx\" (UID: 
\"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454344 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-kubelet\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454375 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-cni-bin\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454449 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-var-lib-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454476 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-log-socket\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454676 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-cni-bin\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 
03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454675 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-ovn\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454734 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-kubelet\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454780 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454773 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-run-netns\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454825 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-var-lib-openvswitch\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454792 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-ovnkube-config\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454840 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454829 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-systemd-units\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.453788 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-node-log\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454876 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-slash\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454943 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.454995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-log-socket\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.455005 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76224070-cd81-4079-92e6-f9ff5d9311bc-run-systemd\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.455333 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-env-overrides\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.455687 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76224070-cd81-4079-92e6-f9ff5d9311bc-ovnkube-script-lib\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.461101 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76224070-cd81-4079-92e6-f9ff5d9311bc-ovn-node-metrics-cert\") pod \"ovnkube-node-lclqx\" 
(UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.483493 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2qph\" (UniqueName: \"kubernetes.io/projected/76224070-cd81-4079-92e6-f9ff5d9311bc-kube-api-access-z2qph\") pod \"ovnkube-node-lclqx\" (UID: \"76224070-cd81-4079-92e6-f9ff5d9311bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.499712 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovnkube-controller/3.log" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.503118 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovn-acl-logging/0.log" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.503968 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jvxfl_226b5379-0cbe-42e6-b5af-917a5e4b734d/ovn-controller/0.log" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504525 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" exitCode=0 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504571 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" exitCode=0 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504594 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" exitCode=0 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 
04:27:14.504625 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" exitCode=0 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504644 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" exitCode=0 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504662 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" exitCode=0 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504727 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504735 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" exitCode=143 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504583 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504784 4865 generic.go:334] "Generic (PLEG): container finished" podID="226b5379-0cbe-42e6-b5af-917a5e4b734d" containerID="5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" exitCode=143 Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504858 4865 scope.go:117] "RemoveContainer" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.504834 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505221 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505327 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505429 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505462 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505491 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505515 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505530 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505546 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505562 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505580 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505596 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505611 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505625 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505649 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505674 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505691 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505706 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505720 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505735 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.505750 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509449 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 
04:27:14.509457 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509462 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509467 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509484 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509506 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509515 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509520 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509525 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509529 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509536 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509541 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509546 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509553 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509558 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509565 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jvxfl" event={"ID":"226b5379-0cbe-42e6-b5af-917a5e4b734d","Type":"ContainerDied","Data":"f78370ef63a51344507d3580af1fbd3a0e32470e8708dd946def0d1e156b59d4"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509574 4865 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509580 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509586 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509591 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509597 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509602 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509607 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509612 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509617 4865 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.509622 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.512491 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/2.log" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.560089 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.575492 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jvxfl"] Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.578942 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jvxfl"] Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.589166 4865 scope.go:117] "RemoveContainer" containerID="a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.614000 4865 scope.go:117] "RemoveContainer" containerID="e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.628515 4865 scope.go:117] "RemoveContainer" containerID="57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.645813 4865 scope.go:117] "RemoveContainer" containerID="a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.658599 4865 scope.go:117] "RemoveContainer" 
containerID="e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.678106 4865 scope.go:117] "RemoveContainer" containerID="ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.695036 4865 scope.go:117] "RemoveContainer" containerID="5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.709716 4865 scope.go:117] "RemoveContainer" containerID="de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.742830 4865 scope.go:117] "RemoveContainer" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.743695 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": container with ID starting with e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09 not found: ID does not exist" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.743760 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} err="failed to get container status \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": rpc error: code = NotFound desc = could not find container \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": container with ID starting with e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.743800 4865 scope.go:117] "RemoveContainer" 
containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.744214 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": container with ID starting with 1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d not found: ID does not exist" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.744251 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} err="failed to get container status \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": rpc error: code = NotFound desc = could not find container \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": container with ID starting with 1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.744274 4865 scope.go:117] "RemoveContainer" containerID="a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.744691 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": container with ID starting with a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5 not found: ID does not exist" containerID="a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.744756 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} err="failed to get container status \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": rpc error: code = NotFound desc = could not find container \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": container with ID starting with a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.744781 4865 scope.go:117] "RemoveContainer" containerID="e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.745106 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": container with ID starting with e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412 not found: ID does not exist" containerID="e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.745143 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} err="failed to get container status \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": rpc error: code = NotFound desc = could not find container \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": container with ID starting with e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.745165 4865 scope.go:117] "RemoveContainer" containerID="57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.745568 4865 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": container with ID starting with 57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5 not found: ID does not exist" containerID="57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.745606 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} err="failed to get container status \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": rpc error: code = NotFound desc = could not find container \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": container with ID starting with 57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.745627 4865 scope.go:117] "RemoveContainer" containerID="a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.745862 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": container with ID starting with a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9 not found: ID does not exist" containerID="a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.745888 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} err="failed to get container status \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": rpc error: code = NotFound desc = could not find container 
\"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": container with ID starting with a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.745909 4865 scope.go:117] "RemoveContainer" containerID="e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.746241 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": container with ID starting with e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069 not found: ID does not exist" containerID="e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.746281 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} err="failed to get container status \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": rpc error: code = NotFound desc = could not find container \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": container with ID starting with e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.746425 4865 scope.go:117] "RemoveContainer" containerID="ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.746798 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": container with ID starting with ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4 not found: ID does not exist" 
containerID="ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.746834 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} err="failed to get container status \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": rpc error: code = NotFound desc = could not find container \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": container with ID starting with ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.746867 4865 scope.go:117] "RemoveContainer" containerID="5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.747283 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": container with ID starting with 5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415 not found: ID does not exist" containerID="5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.747319 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} err="failed to get container status \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": rpc error: code = NotFound desc = could not find container \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": container with ID starting with 5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.747343 4865 scope.go:117] 
"RemoveContainer" containerID="de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca" Jan 03 04:27:14 crc kubenswrapper[4865]: E0103 04:27:14.747676 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": container with ID starting with de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca not found: ID does not exist" containerID="de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.747703 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} err="failed to get container status \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": rpc error: code = NotFound desc = could not find container \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": container with ID starting with de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.747724 4865 scope.go:117] "RemoveContainer" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.748048 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} err="failed to get container status \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": rpc error: code = NotFound desc = could not find container \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": container with ID starting with e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.748073 4865 
scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.748417 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} err="failed to get container status \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": rpc error: code = NotFound desc = could not find container \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": container with ID starting with 1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.748444 4865 scope.go:117] "RemoveContainer" containerID="a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.748965 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} err="failed to get container status \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": rpc error: code = NotFound desc = could not find container \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": container with ID starting with a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.749013 4865 scope.go:117] "RemoveContainer" containerID="e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.749459 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} err="failed to get container status \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": rpc 
error: code = NotFound desc = could not find container \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": container with ID starting with e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.749489 4865 scope.go:117] "RemoveContainer" containerID="57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.749929 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} err="failed to get container status \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": rpc error: code = NotFound desc = could not find container \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": container with ID starting with 57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.749967 4865 scope.go:117] "RemoveContainer" containerID="a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.750361 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} err="failed to get container status \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": rpc error: code = NotFound desc = could not find container \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": container with ID starting with a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.750402 4865 scope.go:117] "RemoveContainer" containerID="e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" Jan 03 04:27:14 crc 
kubenswrapper[4865]: I0103 04:27:14.750739 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} err="failed to get container status \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": rpc error: code = NotFound desc = could not find container \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": container with ID starting with e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.750777 4865 scope.go:117] "RemoveContainer" containerID="ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.751080 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} err="failed to get container status \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": rpc error: code = NotFound desc = could not find container \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": container with ID starting with ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.751108 4865 scope.go:117] "RemoveContainer" containerID="5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.751455 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} err="failed to get container status \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": rpc error: code = NotFound desc = could not find container \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": container 
with ID starting with 5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.751478 4865 scope.go:117] "RemoveContainer" containerID="de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.751934 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} err="failed to get container status \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": rpc error: code = NotFound desc = could not find container \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": container with ID starting with de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.751972 4865 scope.go:117] "RemoveContainer" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.752280 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} err="failed to get container status \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": rpc error: code = NotFound desc = could not find container \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": container with ID starting with e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.752311 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.752641 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} err="failed to get container status \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": rpc error: code = NotFound desc = could not find container \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": container with ID starting with 1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.752676 4865 scope.go:117] "RemoveContainer" containerID="a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.752972 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} err="failed to get container status \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": rpc error: code = NotFound desc = could not find container \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": container with ID starting with a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.753006 4865 scope.go:117] "RemoveContainer" containerID="e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.753305 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} err="failed to get container status \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": rpc error: code = NotFound desc = could not find container \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": container with ID starting with e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412 not found: ID does not 
exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.753326 4865 scope.go:117] "RemoveContainer" containerID="57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.753666 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} err="failed to get container status \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": rpc error: code = NotFound desc = could not find container \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": container with ID starting with 57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.753682 4865 scope.go:117] "RemoveContainer" containerID="a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.754046 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} err="failed to get container status \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": rpc error: code = NotFound desc = could not find container \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": container with ID starting with a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.754072 4865 scope.go:117] "RemoveContainer" containerID="e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.754427 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} err="failed to get container status 
\"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": rpc error: code = NotFound desc = could not find container \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": container with ID starting with e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.754451 4865 scope.go:117] "RemoveContainer" containerID="ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.754733 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} err="failed to get container status \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": rpc error: code = NotFound desc = could not find container \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": container with ID starting with ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.754750 4865 scope.go:117] "RemoveContainer" containerID="5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.755065 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} err="failed to get container status \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": rpc error: code = NotFound desc = could not find container \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": container with ID starting with 5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.755114 4865 scope.go:117] "RemoveContainer" 
containerID="de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.755543 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} err="failed to get container status \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": rpc error: code = NotFound desc = could not find container \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": container with ID starting with de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.755571 4865 scope.go:117] "RemoveContainer" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.755969 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} err="failed to get container status \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": rpc error: code = NotFound desc = could not find container \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": container with ID starting with e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.755998 4865 scope.go:117] "RemoveContainer" containerID="1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.756303 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d"} err="failed to get container status \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": rpc error: code = NotFound desc = could 
not find container \"1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d\": container with ID starting with 1504ba1b98dcfcc06f389566cc8da22f2047d58c7ec2cf76cd2c71038bcd0a7d not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.756335 4865 scope.go:117] "RemoveContainer" containerID="a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.756719 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5"} err="failed to get container status \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": rpc error: code = NotFound desc = could not find container \"a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5\": container with ID starting with a8564dd402bceec6d6c1459076ab58141e05dbd0f2fc4acdf7c3e6a0c3d288f5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.756744 4865 scope.go:117] "RemoveContainer" containerID="e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.757082 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412"} err="failed to get container status \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": rpc error: code = NotFound desc = could not find container \"e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412\": container with ID starting with e510a2a3d305872fff4b808d6dc42dd0916e368e80d1c26c4a250fdf2bc57412 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.757127 4865 scope.go:117] "RemoveContainer" containerID="57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 
04:27:14.757479 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5"} err="failed to get container status \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": rpc error: code = NotFound desc = could not find container \"57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5\": container with ID starting with 57f16c1494204fb470f35a7d584ad57795ec8e87964bb47b2ec348a61db51ed5 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.757513 4865 scope.go:117] "RemoveContainer" containerID="a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.757828 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9"} err="failed to get container status \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": rpc error: code = NotFound desc = could not find container \"a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9\": container with ID starting with a4ef646c0b661d1d9ec0f3f07afaf885449f30c60de6baab15afaa940be544b9 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.757858 4865 scope.go:117] "RemoveContainer" containerID="e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.758173 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069"} err="failed to get container status \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": rpc error: code = NotFound desc = could not find container \"e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069\": container with ID starting with 
e4219e9d747c1171f8a8ad3719ce08cdeec40fc446b446990e28cb2fa33f9069 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.758196 4865 scope.go:117] "RemoveContainer" containerID="ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.758531 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4"} err="failed to get container status \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": rpc error: code = NotFound desc = could not find container \"ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4\": container with ID starting with ffe4dadc55a75f9f908c06dfd9b4431e309be574758f7432040c9d9e60992cf4 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.758569 4865 scope.go:117] "RemoveContainer" containerID="5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.758901 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415"} err="failed to get container status \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": rpc error: code = NotFound desc = could not find container \"5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415\": container with ID starting with 5985f92d84d85a594ca0a011cd3f40d24b73bb7ee95e0ec0901274869cb1a415 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.758937 4865 scope.go:117] "RemoveContainer" containerID="de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.759247 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca"} err="failed to get container status \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": rpc error: code = NotFound desc = could not find container \"de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca\": container with ID starting with de6f2b4cc8e3400fd68be92dae9978ae1161313e154548b16caaadab9fbb35ca not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.759279 4865 scope.go:117] "RemoveContainer" containerID="e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.759624 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09"} err="failed to get container status \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": rpc error: code = NotFound desc = could not find container \"e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09\": container with ID starting with e56536dc5bf08034dec78e53e3385bf24fe9b9c5fbb6a4863c1bfa127c2e8c09 not found: ID does not exist" Jan 03 04:27:14 crc kubenswrapper[4865]: I0103 04:27:14.771225 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:14 crc kubenswrapper[4865]: W0103 04:27:14.793419 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76224070_cd81_4079_92e6_f9ff5d9311bc.slice/crio-e72d5d3ab3685d8e412a8162b363d153c1e3b12f905dd4e650ada067c4db7e49 WatchSource:0}: Error finding container e72d5d3ab3685d8e412a8162b363d153c1e3b12f905dd4e650ada067c4db7e49: Status 404 returned error can't find the container with id e72d5d3ab3685d8e412a8162b363d153c1e3b12f905dd4e650ada067c4db7e49 Jan 03 04:27:15 crc kubenswrapper[4865]: I0103 04:27:15.169898 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226b5379-0cbe-42e6-b5af-917a5e4b734d" path="/var/lib/kubelet/pods/226b5379-0cbe-42e6-b5af-917a5e4b734d/volumes" Jan 03 04:27:15 crc kubenswrapper[4865]: I0103 04:27:15.519968 4865 generic.go:334] "Generic (PLEG): container finished" podID="76224070-cd81-4079-92e6-f9ff5d9311bc" containerID="3cc97d2c7fb76012104224ccbc6103b92ce6cd8ff22be15afcf24d2baee31e6c" exitCode=0 Jan 03 04:27:15 crc kubenswrapper[4865]: I0103 04:27:15.520098 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerDied","Data":"3cc97d2c7fb76012104224ccbc6103b92ce6cd8ff22be15afcf24d2baee31e6c"} Jan 03 04:27:15 crc kubenswrapper[4865]: I0103 04:27:15.520138 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"e72d5d3ab3685d8e412a8162b363d153c1e3b12f905dd4e650ada067c4db7e49"} Jan 03 04:27:16 crc kubenswrapper[4865]: I0103 04:27:16.532029 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" 
event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"4d1f2f5a50745bea5b984c2506c92512e708da6983f0ad4be0d1a80507b82f58"} Jan 03 04:27:16 crc kubenswrapper[4865]: I0103 04:27:16.533009 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"936a40a60fc4107fd2488b14eaafcdf744d0f77902d7e9fd18636a0c161871b4"} Jan 03 04:27:16 crc kubenswrapper[4865]: I0103 04:27:16.533047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"44eea20e5439a4841afb262d5a17c1450015a7bf7432a319a59f29e2ef3cb803"} Jan 03 04:27:16 crc kubenswrapper[4865]: I0103 04:27:16.533074 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"eba6b5651ce4299eee28b6ddb0e3185dda67d4915dbe847f8d1632a50c1b610f"} Jan 03 04:27:16 crc kubenswrapper[4865]: I0103 04:27:16.533100 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"3e0d7a50b235f8d6d6f22a358bb024f463da7aa3949a127ce2497b7843c0f756"} Jan 03 04:27:16 crc kubenswrapper[4865]: I0103 04:27:16.533124 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"56dea01098b9ee750717c1a070945208760359ae91d543d27f315961428a4098"} Jan 03 04:27:19 crc kubenswrapper[4865]: I0103 04:27:19.558757 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" 
event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"240136abaec01424b1fd608d3de0ca199b9e1c3d9923f1bce050b272933512ea"} Jan 03 04:27:21 crc kubenswrapper[4865]: I0103 04:27:21.577756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" event={"ID":"76224070-cd81-4079-92e6-f9ff5d9311bc","Type":"ContainerStarted","Data":"60eba06e87cdb0961b4542b4691a67b0d4da8cac4f6304f191de4aee7fbb2ce2"} Jan 03 04:27:21 crc kubenswrapper[4865]: I0103 04:27:21.578269 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:21 crc kubenswrapper[4865]: I0103 04:27:21.578299 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:21 crc kubenswrapper[4865]: I0103 04:27:21.578319 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:21 crc kubenswrapper[4865]: I0103 04:27:21.620683 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:21 crc kubenswrapper[4865]: I0103 04:27:21.621612 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:21 crc kubenswrapper[4865]: I0103 04:27:21.630255 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" podStartSLOduration=7.630237458 podStartE2EDuration="7.630237458s" podCreationTimestamp="2026-01-03 04:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:27:21.622702717 +0000 UTC m=+668.739755952" watchObservedRunningTime="2026-01-03 04:27:21.630237458 +0000 UTC m=+668.747290643" Jan 03 04:27:28 crc 
kubenswrapper[4865]: I0103 04:27:28.156284 4865 scope.go:117] "RemoveContainer" containerID="5f51ac2adbceb834fc3a6428c9be6afad0e378157dfa78c123bf38f0332c7c30" Jan 03 04:27:28 crc kubenswrapper[4865]: E0103 04:27:28.157312 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nrhl2_openshift-multus(2fadcfb6-a571-4d6b-af2d-da885a478206)\"" pod="openshift-multus/multus-nrhl2" podUID="2fadcfb6-a571-4d6b-af2d-da885a478206" Jan 03 04:27:40 crc kubenswrapper[4865]: I0103 04:27:40.155359 4865 scope.go:117] "RemoveContainer" containerID="5f51ac2adbceb834fc3a6428c9be6afad0e378157dfa78c123bf38f0332c7c30" Jan 03 04:27:40 crc kubenswrapper[4865]: I0103 04:27:40.704196 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nrhl2_2fadcfb6-a571-4d6b-af2d-da885a478206/kube-multus/2.log" Jan 03 04:27:40 crc kubenswrapper[4865]: I0103 04:27:40.704980 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nrhl2" event={"ID":"2fadcfb6-a571-4d6b-af2d-da885a478206","Type":"ContainerStarted","Data":"3cee03efda8f4c0a76b3f5297b3ca5c49ed51ac9954ad6449612325658f9d716"} Jan 03 04:27:44 crc kubenswrapper[4865]: I0103 04:27:44.884087 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lclqx" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.017830 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42"] Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.019100 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.022436 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.036433 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42"] Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.129439 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.129676 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.129725 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkqf\" (UniqueName: \"kubernetes.io/projected/cfe6b8e2-59c4-41ac-9665-54fab9d47829-kube-api-access-2qkqf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: 
I0103 04:27:56.231474 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.231891 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkqf\" (UniqueName: \"kubernetes.io/projected/cfe6b8e2-59c4-41ac-9665-54fab9d47829-kube-api-access-2qkqf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.232006 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.232465 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.232762 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.258686 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkqf\" (UniqueName: \"kubernetes.io/projected/cfe6b8e2-59c4-41ac-9665-54fab9d47829-kube-api-access-2qkqf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.345063 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:27:56 crc kubenswrapper[4865]: I0103 04:27:56.830445 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42"] Jan 03 04:27:56 crc kubenswrapper[4865]: W0103 04:27:56.836007 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfe6b8e2_59c4_41ac_9665_54fab9d47829.slice/crio-e4af48730aeae780f49252ac884b96a87e07b85d6430f4fcb06a9595d3f87e61 WatchSource:0}: Error finding container e4af48730aeae780f49252ac884b96a87e07b85d6430f4fcb06a9595d3f87e61: Status 404 returned error can't find the container with id e4af48730aeae780f49252ac884b96a87e07b85d6430f4fcb06a9595d3f87e61 Jan 03 04:27:57 crc kubenswrapper[4865]: I0103 04:27:57.817108 4865 generic.go:334] "Generic (PLEG): container finished" podID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerID="9968ef39a9e73e847e14dbb488fdde5f6a72c59d3e0da59c7a93865ab35b118a" 
exitCode=0 Jan 03 04:27:57 crc kubenswrapper[4865]: I0103 04:27:57.817167 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" event={"ID":"cfe6b8e2-59c4-41ac-9665-54fab9d47829","Type":"ContainerDied","Data":"9968ef39a9e73e847e14dbb488fdde5f6a72c59d3e0da59c7a93865ab35b118a"} Jan 03 04:27:57 crc kubenswrapper[4865]: I0103 04:27:57.817203 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" event={"ID":"cfe6b8e2-59c4-41ac-9665-54fab9d47829","Type":"ContainerStarted","Data":"e4af48730aeae780f49252ac884b96a87e07b85d6430f4fcb06a9595d3f87e61"} Jan 03 04:27:59 crc kubenswrapper[4865]: I0103 04:27:59.833431 4865 generic.go:334] "Generic (PLEG): container finished" podID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerID="226ccc3ddd1b44567227b1d776ea81b819970c416c34c92fc6e26fae372695ca" exitCode=0 Jan 03 04:27:59 crc kubenswrapper[4865]: I0103 04:27:59.833698 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" event={"ID":"cfe6b8e2-59c4-41ac-9665-54fab9d47829","Type":"ContainerDied","Data":"226ccc3ddd1b44567227b1d776ea81b819970c416c34c92fc6e26fae372695ca"} Jan 03 04:28:00 crc kubenswrapper[4865]: I0103 04:28:00.844583 4865 generic.go:334] "Generic (PLEG): container finished" podID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerID="473b22aa9b6c93f8cd8a72106d903b22f3016f5058a91707d4a997d1fcdd30a2" exitCode=0 Jan 03 04:28:00 crc kubenswrapper[4865]: I0103 04:28:00.844679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" event={"ID":"cfe6b8e2-59c4-41ac-9665-54fab9d47829","Type":"ContainerDied","Data":"473b22aa9b6c93f8cd8a72106d903b22f3016f5058a91707d4a997d1fcdd30a2"} Jan 03 04:28:02 crc 
kubenswrapper[4865]: I0103 04:28:02.212355 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.318462 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qkqf\" (UniqueName: \"kubernetes.io/projected/cfe6b8e2-59c4-41ac-9665-54fab9d47829-kube-api-access-2qkqf\") pod \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.318882 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-util\") pod \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.318939 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-bundle\") pod \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\" (UID: \"cfe6b8e2-59c4-41ac-9665-54fab9d47829\") " Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.319626 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-bundle" (OuterVolumeSpecName: "bundle") pod "cfe6b8e2-59c4-41ac-9665-54fab9d47829" (UID: "cfe6b8e2-59c4-41ac-9665-54fab9d47829"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.327753 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe6b8e2-59c4-41ac-9665-54fab9d47829-kube-api-access-2qkqf" (OuterVolumeSpecName: "kube-api-access-2qkqf") pod "cfe6b8e2-59c4-41ac-9665-54fab9d47829" (UID: "cfe6b8e2-59c4-41ac-9665-54fab9d47829"). InnerVolumeSpecName "kube-api-access-2qkqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.421295 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qkqf\" (UniqueName: \"kubernetes.io/projected/cfe6b8e2-59c4-41ac-9665-54fab9d47829-kube-api-access-2qkqf\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.421345 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.577262 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-util" (OuterVolumeSpecName: "util") pod "cfe6b8e2-59c4-41ac-9665-54fab9d47829" (UID: "cfe6b8e2-59c4-41ac-9665-54fab9d47829"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.624174 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cfe6b8e2-59c4-41ac-9665-54fab9d47829-util\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.867816 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" event={"ID":"cfe6b8e2-59c4-41ac-9665-54fab9d47829","Type":"ContainerDied","Data":"e4af48730aeae780f49252ac884b96a87e07b85d6430f4fcb06a9595d3f87e61"} Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.867885 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4af48730aeae780f49252ac884b96a87e07b85d6430f4fcb06a9595d3f87e61" Jan 03 04:28:02 crc kubenswrapper[4865]: I0103 04:28:02.867926 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.121635 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-hbxcc"] Jan 03 04:28:05 crc kubenswrapper[4865]: E0103 04:28:05.122241 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerName="util" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.122260 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerName="util" Jan 03 04:28:05 crc kubenswrapper[4865]: E0103 04:28:05.122294 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerName="extract" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.122307 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerName="extract" Jan 03 04:28:05 crc kubenswrapper[4865]: E0103 04:28:05.122335 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerName="pull" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.122348 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerName="pull" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.122536 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe6b8e2-59c4-41ac-9665-54fab9d47829" containerName="extract" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.123084 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.127234 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.127482 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.127803 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wcm4t" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.137682 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-hbxcc"] Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.265974 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krltk\" (UniqueName: \"kubernetes.io/projected/d09a2e94-e2b5-4780-9269-564415d6627a-kube-api-access-krltk\") pod \"nmstate-operator-6769fb99d-hbxcc\" (UID: \"d09a2e94-e2b5-4780-9269-564415d6627a\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" Jan 03 
04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.367470 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krltk\" (UniqueName: \"kubernetes.io/projected/d09a2e94-e2b5-4780-9269-564415d6627a-kube-api-access-krltk\") pod \"nmstate-operator-6769fb99d-hbxcc\" (UID: \"d09a2e94-e2b5-4780-9269-564415d6627a\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.397949 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krltk\" (UniqueName: \"kubernetes.io/projected/d09a2e94-e2b5-4780-9269-564415d6627a-kube-api-access-krltk\") pod \"nmstate-operator-6769fb99d-hbxcc\" (UID: \"d09a2e94-e2b5-4780-9269-564415d6627a\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.439735 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" Jan 03 04:28:05 crc kubenswrapper[4865]: I0103 04:28:05.910075 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-hbxcc"] Jan 03 04:28:06 crc kubenswrapper[4865]: I0103 04:28:06.893176 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" event={"ID":"d09a2e94-e2b5-4780-9269-564415d6627a","Type":"ContainerStarted","Data":"b4c83d4df0036a301015c0858865f66c3df728e0d61e8f7a127746a01afb1d26"} Jan 03 04:28:08 crc kubenswrapper[4865]: I0103 04:28:08.907871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" event={"ID":"d09a2e94-e2b5-4780-9269-564415d6627a","Type":"ContainerStarted","Data":"cd153194ee0709997d10bf353374f48f2b04428bc0be724b2c2f0b0121cec84e"} Jan 03 04:28:08 crc kubenswrapper[4865]: I0103 04:28:08.940558 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-operator-6769fb99d-hbxcc" podStartSLOduration=1.487569947 podStartE2EDuration="3.940533467s" podCreationTimestamp="2026-01-03 04:28:05 +0000 UTC" firstStartedPulling="2026-01-03 04:28:05.920949995 +0000 UTC m=+713.038003180" lastFinishedPulling="2026-01-03 04:28:08.373913475 +0000 UTC m=+715.490966700" observedRunningTime="2026-01-03 04:28:08.929888009 +0000 UTC m=+716.046941254" watchObservedRunningTime="2026-01-03 04:28:08.940533467 +0000 UTC m=+716.057586692" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.141974 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.143269 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.146744 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bq4sl" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.153235 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.154073 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.155395 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.158619 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.186509 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hvkxm"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.189411 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.189572 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.215248 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9kql\" (UniqueName: \"kubernetes.io/projected/386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e-kube-api-access-g9kql\") pod \"nmstate-metrics-7f7f7578db-7c2qx\" (UID: \"386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.215312 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgxz8\" (UniqueName: \"kubernetes.io/projected/44d7b71d-b005-4033-8bda-db39169f98a4-kube-api-access-qgxz8\") pod \"nmstate-webhook-f8fb84555-r6w2b\" (UID: \"44d7b71d-b005-4033-8bda-db39169f98a4\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.215357 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/44d7b71d-b005-4033-8bda-db39169f98a4-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-r6w2b\" (UID: \"44d7b71d-b005-4033-8bda-db39169f98a4\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.286938 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.287788 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.289994 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.290182 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.290293 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-f6cbv" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.306007 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.316304 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-dbus-socket\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.316478 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9kql\" (UniqueName: \"kubernetes.io/projected/386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e-kube-api-access-g9kql\") 
pod \"nmstate-metrics-7f7f7578db-7c2qx\" (UID: \"386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.316536 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kj5\" (UniqueName: \"kubernetes.io/projected/5d65eeb2-6f33-4e5c-8470-f654c785e04f-kube-api-access-s7kj5\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.316565 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgxz8\" (UniqueName: \"kubernetes.io/projected/44d7b71d-b005-4033-8bda-db39169f98a4-kube-api-access-qgxz8\") pod \"nmstate-webhook-f8fb84555-r6w2b\" (UID: \"44d7b71d-b005-4033-8bda-db39169f98a4\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.316620 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-ovs-socket\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.316649 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/44d7b71d-b005-4033-8bda-db39169f98a4-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-r6w2b\" (UID: \"44d7b71d-b005-4033-8bda-db39169f98a4\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.316693 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-nmstate-lock\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: E0103 04:28:16.316825 4865 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 03 04:28:16 crc kubenswrapper[4865]: E0103 04:28:16.316910 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d7b71d-b005-4033-8bda-db39169f98a4-tls-key-pair podName:44d7b71d-b005-4033-8bda-db39169f98a4 nodeName:}" failed. No retries permitted until 2026-01-03 04:28:16.816883686 +0000 UTC m=+723.933936891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/44d7b71d-b005-4033-8bda-db39169f98a4-tls-key-pair") pod "nmstate-webhook-f8fb84555-r6w2b" (UID: "44d7b71d-b005-4033-8bda-db39169f98a4") : secret "openshift-nmstate-webhook" not found Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.339182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgxz8\" (UniqueName: \"kubernetes.io/projected/44d7b71d-b005-4033-8bda-db39169f98a4-kube-api-access-qgxz8\") pod \"nmstate-webhook-f8fb84555-r6w2b\" (UID: \"44d7b71d-b005-4033-8bda-db39169f98a4\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.339285 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9kql\" (UniqueName: \"kubernetes.io/projected/386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e-kube-api-access-g9kql\") pod \"nmstate-metrics-7f7f7578db-7c2qx\" (UID: \"386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418231 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-dbus-socket\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418291 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/751c8c50-a8f2-456e-95d9-40d6e80de893-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418316 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjd2m\" (UniqueName: \"kubernetes.io/projected/751c8c50-a8f2-456e-95d9-40d6e80de893-kube-api-access-jjd2m\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418351 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7kj5\" (UniqueName: \"kubernetes.io/projected/5d65eeb2-6f33-4e5c-8470-f654c785e04f-kube-api-access-s7kj5\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418398 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-ovs-socket\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418417 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/751c8c50-a8f2-456e-95d9-40d6e80de893-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418458 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-nmstate-lock\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418497 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-ovs-socket\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418524 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-nmstate-lock\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.418645 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d65eeb2-6f33-4e5c-8470-f654c785e04f-dbus-socket\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.436907 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s7kj5\" (UniqueName: \"kubernetes.io/projected/5d65eeb2-6f33-4e5c-8470-f654c785e04f-kube-api-access-s7kj5\") pod \"nmstate-handler-hvkxm\" (UID: \"5d65eeb2-6f33-4e5c-8470-f654c785e04f\") " pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.464822 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.478795 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6d9cbf9545-c5wbh"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.479802 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.496751 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d9cbf9545-c5wbh"] Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.503445 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519038 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/751c8c50-a8f2-456e-95d9-40d6e80de893-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519076 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjd2m\" (UniqueName: \"kubernetes.io/projected/751c8c50-a8f2-456e-95d9-40d6e80de893-kube-api-access-jjd2m\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519096 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-trusted-ca-bundle\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519139 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9v6n\" (UniqueName: \"kubernetes.io/projected/269c4d56-2264-4f28-b6a9-6f34d1d8328f-kube-api-access-v9v6n\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519161 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-oauth-serving-cert\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519187 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/751c8c50-a8f2-456e-95d9-40d6e80de893-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519227 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-oauth-config\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: E0103 04:28:16.519329 4865 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519405 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-serving-cert\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: E0103 04:28:16.519460 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/751c8c50-a8f2-456e-95d9-40d6e80de893-plugin-serving-cert podName:751c8c50-a8f2-456e-95d9-40d6e80de893 nodeName:}" failed. 
No retries permitted until 2026-01-03 04:28:17.019414381 +0000 UTC m=+724.136467666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/751c8c50-a8f2-456e-95d9-40d6e80de893-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-s4wgm" (UID: "751c8c50-a8f2-456e-95d9-40d6e80de893") : secret "plugin-serving-cert" not found Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519495 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-service-ca\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.519563 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-config\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.521001 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/751c8c50-a8f2-456e-95d9-40d6e80de893-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: W0103 04:28:16.533023 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d65eeb2_6f33_4e5c_8470_f654c785e04f.slice/crio-3eadae81486956345bd0ba73a6c9bb9083dfbf8682a70c0840b3d7e02a623cb5 WatchSource:0}: Error finding container 
3eadae81486956345bd0ba73a6c9bb9083dfbf8682a70c0840b3d7e02a623cb5: Status 404 returned error can't find the container with id 3eadae81486956345bd0ba73a6c9bb9083dfbf8682a70c0840b3d7e02a623cb5 Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.539043 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjd2m\" (UniqueName: \"kubernetes.io/projected/751c8c50-a8f2-456e-95d9-40d6e80de893-kube-api-access-jjd2m\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.620973 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9v6n\" (UniqueName: \"kubernetes.io/projected/269c4d56-2264-4f28-b6a9-6f34d1d8328f-kube-api-access-v9v6n\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.621014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-oauth-serving-cert\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.621057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-oauth-config\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.621090 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-serving-cert\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.621108 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-service-ca\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.621127 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-config\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.621161 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-trusted-ca-bundle\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.622510 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-oauth-serving-cert\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.622678 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-trusted-ca-bundle\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.623181 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-config\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.623182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/269c4d56-2264-4f28-b6a9-6f34d1d8328f-service-ca\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.627853 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-serving-cert\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.632454 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/269c4d56-2264-4f28-b6a9-6f34d1d8328f-console-oauth-config\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.638808 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9v6n\" (UniqueName: 
\"kubernetes.io/projected/269c4d56-2264-4f28-b6a9-6f34d1d8328f-kube-api-access-v9v6n\") pod \"console-6d9cbf9545-c5wbh\" (UID: \"269c4d56-2264-4f28-b6a9-6f34d1d8328f\") " pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.690162 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx"] Jan 03 04:28:16 crc kubenswrapper[4865]: W0103 04:28:16.694346 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod386bf1ab_ac9c_4ee2_8c8f_e57b98bc590e.slice/crio-628897e287c0769f5234ebbcc8244c5085b6b769c8861624c1febb3598f6955a WatchSource:0}: Error finding container 628897e287c0769f5234ebbcc8244c5085b6b769c8861624c1febb3598f6955a: Status 404 returned error can't find the container with id 628897e287c0769f5234ebbcc8244c5085b6b769c8861624c1febb3598f6955a Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.821651 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.822598 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/44d7b71d-b005-4033-8bda-db39169f98a4-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-r6w2b\" (UID: \"44d7b71d-b005-4033-8bda-db39169f98a4\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.827717 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/44d7b71d-b005-4033-8bda-db39169f98a4-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-r6w2b\" (UID: \"44d7b71d-b005-4033-8bda-db39169f98a4\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.964371 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" event={"ID":"386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e","Type":"ContainerStarted","Data":"628897e287c0769f5234ebbcc8244c5085b6b769c8861624c1febb3598f6955a"} Jan 03 04:28:16 crc kubenswrapper[4865]: I0103 04:28:16.965528 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hvkxm" event={"ID":"5d65eeb2-6f33-4e5c-8470-f654c785e04f","Type":"ContainerStarted","Data":"3eadae81486956345bd0ba73a6c9bb9083dfbf8682a70c0840b3d7e02a623cb5"} Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.026065 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/751c8c50-a8f2-456e-95d9-40d6e80de893-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.059949 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/751c8c50-a8f2-456e-95d9-40d6e80de893-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-s4wgm\" (UID: \"751c8c50-a8f2-456e-95d9-40d6e80de893\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.074757 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.099865 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d9cbf9545-c5wbh"] Jan 03 04:28:17 crc kubenswrapper[4865]: W0103 04:28:17.113636 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269c4d56_2264_4f28_b6a9_6f34d1d8328f.slice/crio-8489e0213d61d32fc196603bc547c950975538ea8a37358f0ba05b40ed05e683 WatchSource:0}: Error finding container 8489e0213d61d32fc196603bc547c950975538ea8a37358f0ba05b40ed05e683: Status 404 returned error can't find the container with id 8489e0213d61d32fc196603bc547c950975538ea8a37358f0ba05b40ed05e683 Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.199010 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.329306 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b"] Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.438100 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm"] Jan 03 04:28:17 crc kubenswrapper[4865]: W0103 04:28:17.445425 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751c8c50_a8f2_456e_95d9_40d6e80de893.slice/crio-0b965068646337ddbf2d2ce61a670836f67c7f0a502d2ab3a0dd01b8b8b13b4a WatchSource:0}: Error finding container 0b965068646337ddbf2d2ce61a670836f67c7f0a502d2ab3a0dd01b8b8b13b4a: Status 404 returned error can't find the container with id 0b965068646337ddbf2d2ce61a670836f67c7f0a502d2ab3a0dd01b8b8b13b4a Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.973033 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" event={"ID":"44d7b71d-b005-4033-8bda-db39169f98a4","Type":"ContainerStarted","Data":"102261685a11ffd1d92d4089168160a2af10ca2358a3704931c47a470c22cae4"} Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.975067 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d9cbf9545-c5wbh" event={"ID":"269c4d56-2264-4f28-b6a9-6f34d1d8328f","Type":"ContainerStarted","Data":"ed503b21da04df45b52c83b49530c7cafa6f7b4e573c9b1dc31b83c3619db833"} Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 04:28:17.975113 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d9cbf9545-c5wbh" event={"ID":"269c4d56-2264-4f28-b6a9-6f34d1d8328f","Type":"ContainerStarted","Data":"8489e0213d61d32fc196603bc547c950975538ea8a37358f0ba05b40ed05e683"} Jan 03 04:28:17 crc kubenswrapper[4865]: I0103 
04:28:17.976298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" event={"ID":"751c8c50-a8f2-456e-95d9-40d6e80de893","Type":"ContainerStarted","Data":"0b965068646337ddbf2d2ce61a670836f67c7f0a502d2ab3a0dd01b8b8b13b4a"} Jan 03 04:28:18 crc kubenswrapper[4865]: I0103 04:28:18.011245 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6d9cbf9545-c5wbh" podStartSLOduration=2.011227602 podStartE2EDuration="2.011227602s" podCreationTimestamp="2026-01-03 04:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:28:18.009491186 +0000 UTC m=+725.126544371" watchObservedRunningTime="2026-01-03 04:28:18.011227602 +0000 UTC m=+725.128280787" Jan 03 04:28:19 crc kubenswrapper[4865]: I0103 04:28:19.997279 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hvkxm" event={"ID":"5d65eeb2-6f33-4e5c-8470-f654c785e04f","Type":"ContainerStarted","Data":"9b5d692764a8f8b3ebeb8ce6aadc51b8d35e232cb1783ac0acd490d6f825cc80"} Jan 03 04:28:19 crc kubenswrapper[4865]: I0103 04:28:19.997718 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:20 crc kubenswrapper[4865]: I0103 04:28:20.001653 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" event={"ID":"386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e","Type":"ContainerStarted","Data":"b252a15dca87ea1adb55f66c59af5d2f4426e87bd931dc68d5b4816ab3a08a38"} Jan 03 04:28:20 crc kubenswrapper[4865]: I0103 04:28:20.005403 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" 
event={"ID":"44d7b71d-b005-4033-8bda-db39169f98a4","Type":"ContainerStarted","Data":"92135839779ab7b813ef95749cfecdd42c7d2c8dcc29b478e658f6e301a609e1"} Jan 03 04:28:20 crc kubenswrapper[4865]: I0103 04:28:20.005646 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:20 crc kubenswrapper[4865]: I0103 04:28:20.024966 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hvkxm" podStartSLOduration=1.641758642 podStartE2EDuration="4.0248637s" podCreationTimestamp="2026-01-03 04:28:16 +0000 UTC" firstStartedPulling="2026-01-03 04:28:16.535099544 +0000 UTC m=+723.652152729" lastFinishedPulling="2026-01-03 04:28:18.918204602 +0000 UTC m=+726.035257787" observedRunningTime="2026-01-03 04:28:20.018655482 +0000 UTC m=+727.135708727" watchObservedRunningTime="2026-01-03 04:28:20.0248637 +0000 UTC m=+727.141916915" Jan 03 04:28:20 crc kubenswrapper[4865]: I0103 04:28:20.045055 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" podStartSLOduration=2.465889969 podStartE2EDuration="4.045015033s" podCreationTimestamp="2026-01-03 04:28:16 +0000 UTC" firstStartedPulling="2026-01-03 04:28:17.346581341 +0000 UTC m=+724.463634546" lastFinishedPulling="2026-01-03 04:28:18.925706425 +0000 UTC m=+726.042759610" observedRunningTime="2026-01-03 04:28:20.041247231 +0000 UTC m=+727.158300416" watchObservedRunningTime="2026-01-03 04:28:20.045015033 +0000 UTC m=+727.162068308" Jan 03 04:28:21 crc kubenswrapper[4865]: I0103 04:28:21.018506 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" event={"ID":"751c8c50-a8f2-456e-95d9-40d6e80de893","Type":"ContainerStarted","Data":"5287c038447b3b9714670feb80da8a96de9bf3207db94904b880f363735b14ff"} Jan 03 04:28:21 crc kubenswrapper[4865]: I0103 04:28:21.045691 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-s4wgm" podStartSLOduration=2.417140916 podStartE2EDuration="5.045666045s" podCreationTimestamp="2026-01-03 04:28:16 +0000 UTC" firstStartedPulling="2026-01-03 04:28:17.447621102 +0000 UTC m=+724.564674287" lastFinishedPulling="2026-01-03 04:28:20.076146221 +0000 UTC m=+727.193199416" observedRunningTime="2026-01-03 04:28:21.040853635 +0000 UTC m=+728.157906850" watchObservedRunningTime="2026-01-03 04:28:21.045666045 +0000 UTC m=+728.162719230" Jan 03 04:28:23 crc kubenswrapper[4865]: I0103 04:28:23.034061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" event={"ID":"386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e","Type":"ContainerStarted","Data":"f4eeb20a6705a23f0b326a829b33f0b4bfb5031a7bcc49cff51129673235218e"} Jan 03 04:28:23 crc kubenswrapper[4865]: I0103 04:28:23.059606 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-7c2qx" podStartSLOduration=1.8398257359999999 podStartE2EDuration="7.059578259s" podCreationTimestamp="2026-01-03 04:28:16 +0000 UTC" firstStartedPulling="2026-01-03 04:28:16.696179242 +0000 UTC m=+723.813232427" lastFinishedPulling="2026-01-03 04:28:21.915931755 +0000 UTC m=+729.032984950" observedRunningTime="2026-01-03 04:28:23.056226779 +0000 UTC m=+730.173279974" watchObservedRunningTime="2026-01-03 04:28:23.059578259 +0000 UTC m=+730.176631454" Jan 03 04:28:26 crc kubenswrapper[4865]: I0103 04:28:26.541257 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hvkxm" Jan 03 04:28:26 crc kubenswrapper[4865]: I0103 04:28:26.822183 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:26 crc kubenswrapper[4865]: I0103 04:28:26.822255 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:26 crc kubenswrapper[4865]: I0103 04:28:26.829570 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:27 crc kubenswrapper[4865]: I0103 04:28:27.069846 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6d9cbf9545-c5wbh" Jan 03 04:28:27 crc kubenswrapper[4865]: I0103 04:28:27.180139 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cgxlq"] Jan 03 04:28:37 crc kubenswrapper[4865]: I0103 04:28:37.084264 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-r6w2b" Jan 03 04:28:47 crc kubenswrapper[4865]: I0103 04:28:47.372877 4865 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.234499 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cgxlq" podUID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" containerName="console" containerID="cri-o://d51e1818eaa1c8c9cacdb1621a4a17c6ab2f5444c77125a501ed003a5e875b21" gracePeriod=15 Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.321143 4865 patch_prober.go:28] interesting pod/console-f9d7485db-cgxlq container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.321219 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-cgxlq" podUID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.476873 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv"] Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.478907 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.483118 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.499047 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv"] Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.590471 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.590592 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.590650 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bzn\" (UniqueName: \"kubernetes.io/projected/b2d5b470-3337-4613-937f-5e289519fc5a-kube-api-access-g4bzn\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.692690 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.692841 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.692911 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4bzn\" (UniqueName: \"kubernetes.io/projected/b2d5b470-3337-4613-937f-5e289519fc5a-kube-api-access-g4bzn\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.693469 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.693778 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.719176 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4bzn\" (UniqueName: \"kubernetes.io/projected/b2d5b470-3337-4613-937f-5e289519fc5a-kube-api-access-g4bzn\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:52 crc kubenswrapper[4865]: I0103 04:28:52.801704 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.086710 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv"] Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.306514 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" event={"ID":"b2d5b470-3337-4613-937f-5e289519fc5a","Type":"ContainerStarted","Data":"bd68f87451541892d2c94bbef088ac4c0c094c7528cf2846adef6f112258b66b"} Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.306584 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" event={"ID":"b2d5b470-3337-4613-937f-5e289519fc5a","Type":"ContainerStarted","Data":"c978712f5a8d57798d50b47759495ff7219f3f81c6bfb49ffecf9d9319a2beba"} Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.310027 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cgxlq_7a6e3a03-5fe2-4e65-a77e-da971c2dd666/console/0.log" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.310074 4865 generic.go:334] "Generic (PLEG): container finished" podID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" containerID="d51e1818eaa1c8c9cacdb1621a4a17c6ab2f5444c77125a501ed003a5e875b21" exitCode=2 Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.310099 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgxlq" event={"ID":"7a6e3a03-5fe2-4e65-a77e-da971c2dd666","Type":"ContainerDied","Data":"d51e1818eaa1c8c9cacdb1621a4a17c6ab2f5444c77125a501ed003a5e875b21"} Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.409732 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-cgxlq_7a6e3a03-5fe2-4e65-a77e-da971c2dd666/console/0.log" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.410034 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.501978 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-config\") pod \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.502078 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-oauth-serving-cert\") pod \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.502116 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-oauth-config\") pod \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.502209 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-trusted-ca-bundle\") pod \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.502239 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-service-ca\") 
pod \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.502349 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-serving-cert\") pod \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.502442 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6kph\" (UniqueName: \"kubernetes.io/projected/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-kube-api-access-p6kph\") pod \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\" (UID: \"7a6e3a03-5fe2-4e65-a77e-da971c2dd666\") " Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.502906 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-config" (OuterVolumeSpecName: "console-config") pod "7a6e3a03-5fe2-4e65-a77e-da971c2dd666" (UID: "7a6e3a03-5fe2-4e65-a77e-da971c2dd666"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.503210 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7a6e3a03-5fe2-4e65-a77e-da971c2dd666" (UID: "7a6e3a03-5fe2-4e65-a77e-da971c2dd666"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.503224 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-service-ca" (OuterVolumeSpecName: "service-ca") pod "7a6e3a03-5fe2-4e65-a77e-da971c2dd666" (UID: "7a6e3a03-5fe2-4e65-a77e-da971c2dd666"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.503363 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7a6e3a03-5fe2-4e65-a77e-da971c2dd666" (UID: "7a6e3a03-5fe2-4e65-a77e-da971c2dd666"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.508016 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7a6e3a03-5fe2-4e65-a77e-da971c2dd666" (UID: "7a6e3a03-5fe2-4e65-a77e-da971c2dd666"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.508766 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-kube-api-access-p6kph" (OuterVolumeSpecName: "kube-api-access-p6kph") pod "7a6e3a03-5fe2-4e65-a77e-da971c2dd666" (UID: "7a6e3a03-5fe2-4e65-a77e-da971c2dd666"). InnerVolumeSpecName "kube-api-access-p6kph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.515324 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7a6e3a03-5fe2-4e65-a77e-da971c2dd666" (UID: "7a6e3a03-5fe2-4e65-a77e-da971c2dd666"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.603661 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6kph\" (UniqueName: \"kubernetes.io/projected/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-kube-api-access-p6kph\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.603717 4865 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.603738 4865 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.603757 4865 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.603775 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.603794 4865 reconciler_common.go:293] "Volume detached 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:53 crc kubenswrapper[4865]: I0103 04:28:53.603811 4865 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6e3a03-5fe2-4e65-a77e-da971c2dd666-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.322167 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2d5b470-3337-4613-937f-5e289519fc5a" containerID="bd68f87451541892d2c94bbef088ac4c0c094c7528cf2846adef6f112258b66b" exitCode=0 Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.322316 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" event={"ID":"b2d5b470-3337-4613-937f-5e289519fc5a","Type":"ContainerDied","Data":"bd68f87451541892d2c94bbef088ac4c0c094c7528cf2846adef6f112258b66b"} Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.326543 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cgxlq_7a6e3a03-5fe2-4e65-a77e-da971c2dd666/console/0.log" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.326609 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cgxlq" event={"ID":"7a6e3a03-5fe2-4e65-a77e-da971c2dd666","Type":"ContainerDied","Data":"1b32df479cfe8b2412d819a68204930dc57188e1687e9c5a9aaf3891cfe69b56"} Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.326663 4865 scope.go:117] "RemoveContainer" containerID="d51e1818eaa1c8c9cacdb1621a4a17c6ab2f5444c77125a501ed003a5e875b21" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.326742 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cgxlq" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.375717 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cgxlq"] Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.382047 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cgxlq"] Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.836226 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2mp7"] Jan 03 04:28:54 crc kubenswrapper[4865]: E0103 04:28:54.837013 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" containerName="console" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.837041 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" containerName="console" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.837235 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" containerName="console" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.838598 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.858976 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2mp7"] Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.923912 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szwqg\" (UniqueName: \"kubernetes.io/projected/7872b870-707b-4b22-84d1-e591c2c086d1-kube-api-access-szwqg\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.924137 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-catalog-content\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:54 crc kubenswrapper[4865]: I0103 04:28:54.924192 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-utilities\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.025949 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szwqg\" (UniqueName: \"kubernetes.io/projected/7872b870-707b-4b22-84d1-e591c2c086d1-kube-api-access-szwqg\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.026130 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-catalog-content\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.026182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-utilities\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.027449 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-utilities\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.027473 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-catalog-content\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.060097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szwqg\" (UniqueName: \"kubernetes.io/projected/7872b870-707b-4b22-84d1-e591c2c086d1-kube-api-access-szwqg\") pod \"redhat-operators-k2mp7\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.163640 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7a6e3a03-5fe2-4e65-a77e-da971c2dd666" path="/var/lib/kubelet/pods/7a6e3a03-5fe2-4e65-a77e-da971c2dd666/volumes" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.176616 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:28:55 crc kubenswrapper[4865]: I0103 04:28:55.395166 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2mp7"] Jan 03 04:28:55 crc kubenswrapper[4865]: W0103 04:28:55.444944 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7872b870_707b_4b22_84d1_e591c2c086d1.slice/crio-f3987d9e887e03879ca5f358ab4fb50aea31def5463289247fce1c411e6e0e34 WatchSource:0}: Error finding container f3987d9e887e03879ca5f358ab4fb50aea31def5463289247fce1c411e6e0e34: Status 404 returned error can't find the container with id f3987d9e887e03879ca5f358ab4fb50aea31def5463289247fce1c411e6e0e34 Jan 03 04:28:56 crc kubenswrapper[4865]: I0103 04:28:56.354502 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2d5b470-3337-4613-937f-5e289519fc5a" containerID="a8c35202ba7c1d557eb6e591544e7c931c64267373c9f9edb6af2ffeabb9b59d" exitCode=0 Jan 03 04:28:56 crc kubenswrapper[4865]: I0103 04:28:56.354546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" event={"ID":"b2d5b470-3337-4613-937f-5e289519fc5a","Type":"ContainerDied","Data":"a8c35202ba7c1d557eb6e591544e7c931c64267373c9f9edb6af2ffeabb9b59d"} Jan 03 04:28:56 crc kubenswrapper[4865]: I0103 04:28:56.357027 4865 generic.go:334] "Generic (PLEG): container finished" podID="7872b870-707b-4b22-84d1-e591c2c086d1" containerID="c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b" exitCode=0 Jan 03 04:28:56 crc kubenswrapper[4865]: I0103 04:28:56.357055 4865 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-k2mp7" event={"ID":"7872b870-707b-4b22-84d1-e591c2c086d1","Type":"ContainerDied","Data":"c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b"} Jan 03 04:28:56 crc kubenswrapper[4865]: I0103 04:28:56.357073 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mp7" event={"ID":"7872b870-707b-4b22-84d1-e591c2c086d1","Type":"ContainerStarted","Data":"f3987d9e887e03879ca5f358ab4fb50aea31def5463289247fce1c411e6e0e34"} Jan 03 04:28:57 crc kubenswrapper[4865]: I0103 04:28:57.365455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mp7" event={"ID":"7872b870-707b-4b22-84d1-e591c2c086d1","Type":"ContainerStarted","Data":"d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661"} Jan 03 04:28:57 crc kubenswrapper[4865]: I0103 04:28:57.370919 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2d5b470-3337-4613-937f-5e289519fc5a" containerID="b47adb6cd7db849cae4ee6ee51cd892d6cea467697cd023692a2015b7bed8590" exitCode=0 Jan 03 04:28:57 crc kubenswrapper[4865]: I0103 04:28:57.370973 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" event={"ID":"b2d5b470-3337-4613-937f-5e289519fc5a","Type":"ContainerDied","Data":"b47adb6cd7db849cae4ee6ee51cd892d6cea467697cd023692a2015b7bed8590"} Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.379819 4865 generic.go:334] "Generic (PLEG): container finished" podID="7872b870-707b-4b22-84d1-e591c2c086d1" containerID="d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661" exitCode=0 Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.379906 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mp7" 
event={"ID":"7872b870-707b-4b22-84d1-e591c2c086d1","Type":"ContainerDied","Data":"d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661"} Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.685830 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.775791 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4bzn\" (UniqueName: \"kubernetes.io/projected/b2d5b470-3337-4613-937f-5e289519fc5a-kube-api-access-g4bzn\") pod \"b2d5b470-3337-4613-937f-5e289519fc5a\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.775941 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-bundle\") pod \"b2d5b470-3337-4613-937f-5e289519fc5a\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.776159 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-util\") pod \"b2d5b470-3337-4613-937f-5e289519fc5a\" (UID: \"b2d5b470-3337-4613-937f-5e289519fc5a\") " Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.777921 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-bundle" (OuterVolumeSpecName: "bundle") pod "b2d5b470-3337-4613-937f-5e289519fc5a" (UID: "b2d5b470-3337-4613-937f-5e289519fc5a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.781842 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d5b470-3337-4613-937f-5e289519fc5a-kube-api-access-g4bzn" (OuterVolumeSpecName: "kube-api-access-g4bzn") pod "b2d5b470-3337-4613-937f-5e289519fc5a" (UID: "b2d5b470-3337-4613-937f-5e289519fc5a"). InnerVolumeSpecName "kube-api-access-g4bzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.877398 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:58 crc kubenswrapper[4865]: I0103 04:28:58.877498 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4bzn\" (UniqueName: \"kubernetes.io/projected/b2d5b470-3337-4613-937f-5e289519fc5a-kube-api-access-g4bzn\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:59 crc kubenswrapper[4865]: I0103 04:28:59.059705 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-util" (OuterVolumeSpecName: "util") pod "b2d5b470-3337-4613-937f-5e289519fc5a" (UID: "b2d5b470-3337-4613-937f-5e289519fc5a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:28:59 crc kubenswrapper[4865]: I0103 04:28:59.080244 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2d5b470-3337-4613-937f-5e289519fc5a-util\") on node \"crc\" DevicePath \"\"" Jan 03 04:28:59 crc kubenswrapper[4865]: I0103 04:28:59.388517 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mp7" event={"ID":"7872b870-707b-4b22-84d1-e591c2c086d1","Type":"ContainerStarted","Data":"5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73"} Jan 03 04:28:59 crc kubenswrapper[4865]: I0103 04:28:59.392074 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" event={"ID":"b2d5b470-3337-4613-937f-5e289519fc5a","Type":"ContainerDied","Data":"c978712f5a8d57798d50b47759495ff7219f3f81c6bfb49ffecf9d9319a2beba"} Jan 03 04:28:59 crc kubenswrapper[4865]: I0103 04:28:59.392137 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv" Jan 03 04:28:59 crc kubenswrapper[4865]: I0103 04:28:59.392152 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c978712f5a8d57798d50b47759495ff7219f3f81c6bfb49ffecf9d9319a2beba" Jan 03 04:28:59 crc kubenswrapper[4865]: I0103 04:28:59.414673 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2mp7" podStartSLOduration=2.687302754 podStartE2EDuration="5.414646913s" podCreationTimestamp="2026-01-03 04:28:54 +0000 UTC" firstStartedPulling="2026-01-03 04:28:56.358644811 +0000 UTC m=+763.475698046" lastFinishedPulling="2026-01-03 04:28:59.08598898 +0000 UTC m=+766.203042205" observedRunningTime="2026-01-03 04:28:59.407145511 +0000 UTC m=+766.524198716" watchObservedRunningTime="2026-01-03 04:28:59.414646913 +0000 UTC m=+766.531700138" Jan 03 04:29:05 crc kubenswrapper[4865]: I0103 04:29:05.176927 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:29:05 crc kubenswrapper[4865]: I0103 04:29:05.177406 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:29:06 crc kubenswrapper[4865]: I0103 04:29:06.231747 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2mp7" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="registry-server" probeResult="failure" output=< Jan 03 04:29:06 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 03 04:29:06 crc kubenswrapper[4865]: > Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.903495 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d"] Jan 03 04:29:07 crc kubenswrapper[4865]: E0103 
04:29:07.903687 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d5b470-3337-4613-937f-5e289519fc5a" containerName="util" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.903697 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d5b470-3337-4613-937f-5e289519fc5a" containerName="util" Jan 03 04:29:07 crc kubenswrapper[4865]: E0103 04:29:07.903715 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d5b470-3337-4613-937f-5e289519fc5a" containerName="extract" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.903720 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d5b470-3337-4613-937f-5e289519fc5a" containerName="extract" Jan 03 04:29:07 crc kubenswrapper[4865]: E0103 04:29:07.903730 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d5b470-3337-4613-937f-5e289519fc5a" containerName="pull" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.903736 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d5b470-3337-4613-937f-5e289519fc5a" containerName="pull" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.903836 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d5b470-3337-4613-937f-5e289519fc5a" containerName="extract" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.904191 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.906156 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.909850 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.920808 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xxbbj" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.920839 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.920909 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 03 04:29:07 crc kubenswrapper[4865]: I0103 04:29:07.925163 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d"] Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.004076 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca1c3cce-5140-4f1a-bd20-5d1111357543-apiservice-cert\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.004140 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca1c3cce-5140-4f1a-bd20-5d1111357543-webhook-cert\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: 
\"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.004213 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5f2m\" (UniqueName: \"kubernetes.io/projected/ca1c3cce-5140-4f1a-bd20-5d1111357543-kube-api-access-f5f2m\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.105227 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5f2m\" (UniqueName: \"kubernetes.io/projected/ca1c3cce-5140-4f1a-bd20-5d1111357543-kube-api-access-f5f2m\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.105294 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca1c3cce-5140-4f1a-bd20-5d1111357543-apiservice-cert\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.105328 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca1c3cce-5140-4f1a-bd20-5d1111357543-webhook-cert\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.113171 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca1c3cce-5140-4f1a-bd20-5d1111357543-apiservice-cert\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.113705 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca1c3cce-5140-4f1a-bd20-5d1111357543-webhook-cert\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.120983 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5f2m\" (UniqueName: \"kubernetes.io/projected/ca1c3cce-5140-4f1a-bd20-5d1111357543-kube-api-access-f5f2m\") pod \"metallb-operator-controller-manager-8c8d45d7-vlx4d\" (UID: \"ca1c3cce-5140-4f1a-bd20-5d1111357543\") " pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.141849 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9"] Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.142505 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.144061 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.144370 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gjs6w" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.144446 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.164280 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9"] Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.206290 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-apiservice-cert\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.206402 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8kqm\" (UniqueName: \"kubernetes.io/projected/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-kube-api-access-r8kqm\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.206460 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-webhook-cert\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.221924 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.307336 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-webhook-cert\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.307395 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-apiservice-cert\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.307453 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8kqm\" (UniqueName: \"kubernetes.io/projected/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-kube-api-access-r8kqm\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.310642 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-webhook-cert\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.311932 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-apiservice-cert\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.330051 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8kqm\" (UniqueName: \"kubernetes.io/projected/e1b7d25b-22b1-46fd-98e2-8f5de4dfac93-kube-api-access-r8kqm\") pod \"metallb-operator-webhook-server-9b69d945b-jqfc9\" (UID: \"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93\") " pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.458041 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d"] Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.463608 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:08 crc kubenswrapper[4865]: W0103 04:29:08.470973 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca1c3cce_5140_4f1a_bd20_5d1111357543.slice/crio-b67f02284975a229449a4bf4d10dfe513af859a04732621c6ae75a7d49d25c95 WatchSource:0}: Error finding container b67f02284975a229449a4bf4d10dfe513af859a04732621c6ae75a7d49d25c95: Status 404 returned error can't find the container with id b67f02284975a229449a4bf4d10dfe513af859a04732621c6ae75a7d49d25c95 Jan 03 04:29:08 crc kubenswrapper[4865]: I0103 04:29:08.707071 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9"] Jan 03 04:29:08 crc kubenswrapper[4865]: W0103 04:29:08.707903 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1b7d25b_22b1_46fd_98e2_8f5de4dfac93.slice/crio-d5694743b1f3dd51fa23c617a9cd46542cbdf48367be2437d32b6f99db2f92c9 WatchSource:0}: Error finding container d5694743b1f3dd51fa23c617a9cd46542cbdf48367be2437d32b6f99db2f92c9: Status 404 returned error can't find the container with id d5694743b1f3dd51fa23c617a9cd46542cbdf48367be2437d32b6f99db2f92c9 Jan 03 04:29:09 crc kubenswrapper[4865]: I0103 04:29:09.475738 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" event={"ID":"ca1c3cce-5140-4f1a-bd20-5d1111357543","Type":"ContainerStarted","Data":"b67f02284975a229449a4bf4d10dfe513af859a04732621c6ae75a7d49d25c95"} Jan 03 04:29:09 crc kubenswrapper[4865]: I0103 04:29:09.477493 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" 
event={"ID":"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93","Type":"ContainerStarted","Data":"d5694743b1f3dd51fa23c617a9cd46542cbdf48367be2437d32b6f99db2f92c9"} Jan 03 04:29:10 crc kubenswrapper[4865]: I0103 04:29:10.739368 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:29:10 crc kubenswrapper[4865]: I0103 04:29:10.739714 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:29:15 crc kubenswrapper[4865]: I0103 04:29:15.250074 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:29:15 crc kubenswrapper[4865]: I0103 04:29:15.320573 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:29:15 crc kubenswrapper[4865]: I0103 04:29:15.490318 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2mp7"] Jan 03 04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.536183 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" event={"ID":"e1b7d25b-22b1-46fd-98e2-8f5de4dfac93","Type":"ContainerStarted","Data":"2972d3993dc7e30247260497db622f9a8464bdd41bd3234a0d454cc4950cf4e8"} Jan 03 04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.536626 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 
04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.537736 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2mp7" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="registry-server" containerID="cri-o://5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73" gracePeriod=2 Jan 03 04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.538062 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" event={"ID":"ca1c3cce-5140-4f1a-bd20-5d1111357543","Type":"ContainerStarted","Data":"c09e4ae75b50cb25ff95ae348729c053d656a2bd1a83c1b2d1bf7edd7097375f"} Jan 03 04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.538359 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.569841 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" podStartSLOduration=1.81308392 podStartE2EDuration="8.569819376s" podCreationTimestamp="2026-01-03 04:29:08 +0000 UTC" firstStartedPulling="2026-01-03 04:29:08.711213493 +0000 UTC m=+775.828266678" lastFinishedPulling="2026-01-03 04:29:15.467948959 +0000 UTC m=+782.585002134" observedRunningTime="2026-01-03 04:29:16.565027635 +0000 UTC m=+783.682080860" watchObservedRunningTime="2026-01-03 04:29:16.569819376 +0000 UTC m=+783.686872571" Jan 03 04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.596577 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" podStartSLOduration=2.658997496 podStartE2EDuration="9.596560801s" podCreationTimestamp="2026-01-03 04:29:07 +0000 UTC" firstStartedPulling="2026-01-03 04:29:08.47346405 +0000 UTC m=+775.590517225" lastFinishedPulling="2026-01-03 
04:29:15.411027345 +0000 UTC m=+782.528080530" observedRunningTime="2026-01-03 04:29:16.595215535 +0000 UTC m=+783.712268750" watchObservedRunningTime="2026-01-03 04:29:16.596560801 +0000 UTC m=+783.713613986" Jan 03 04:29:16 crc kubenswrapper[4865]: I0103 04:29:16.931954 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.038097 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-utilities\") pod \"7872b870-707b-4b22-84d1-e591c2c086d1\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.038223 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-catalog-content\") pod \"7872b870-707b-4b22-84d1-e591c2c086d1\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.038287 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szwqg\" (UniqueName: \"kubernetes.io/projected/7872b870-707b-4b22-84d1-e591c2c086d1-kube-api-access-szwqg\") pod \"7872b870-707b-4b22-84d1-e591c2c086d1\" (UID: \"7872b870-707b-4b22-84d1-e591c2c086d1\") " Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.041729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-utilities" (OuterVolumeSpecName: "utilities") pod "7872b870-707b-4b22-84d1-e591c2c086d1" (UID: "7872b870-707b-4b22-84d1-e591c2c086d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.054630 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7872b870-707b-4b22-84d1-e591c2c086d1-kube-api-access-szwqg" (OuterVolumeSpecName: "kube-api-access-szwqg") pod "7872b870-707b-4b22-84d1-e591c2c086d1" (UID: "7872b870-707b-4b22-84d1-e591c2c086d1"). InnerVolumeSpecName "kube-api-access-szwqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.140276 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.140323 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szwqg\" (UniqueName: \"kubernetes.io/projected/7872b870-707b-4b22-84d1-e591c2c086d1-kube-api-access-szwqg\") on node \"crc\" DevicePath \"\"" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.177855 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7872b870-707b-4b22-84d1-e591c2c086d1" (UID: "7872b870-707b-4b22-84d1-e591c2c086d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.241655 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7872b870-707b-4b22-84d1-e591c2c086d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.546633 4865 generic.go:334] "Generic (PLEG): container finished" podID="7872b870-707b-4b22-84d1-e591c2c086d1" containerID="5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73" exitCode=0 Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.546709 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2mp7" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.546729 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mp7" event={"ID":"7872b870-707b-4b22-84d1-e591c2c086d1","Type":"ContainerDied","Data":"5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73"} Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.546790 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2mp7" event={"ID":"7872b870-707b-4b22-84d1-e591c2c086d1","Type":"ContainerDied","Data":"f3987d9e887e03879ca5f358ab4fb50aea31def5463289247fce1c411e6e0e34"} Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.546818 4865 scope.go:117] "RemoveContainer" containerID="5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.564896 4865 scope.go:117] "RemoveContainer" containerID="d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.579059 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2mp7"] Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 
04:29:17.584436 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2mp7"] Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.587826 4865 scope.go:117] "RemoveContainer" containerID="c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.612453 4865 scope.go:117] "RemoveContainer" containerID="5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73" Jan 03 04:29:17 crc kubenswrapper[4865]: E0103 04:29:17.612936 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73\": container with ID starting with 5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73 not found: ID does not exist" containerID="5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.613003 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73"} err="failed to get container status \"5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73\": rpc error: code = NotFound desc = could not find container \"5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73\": container with ID starting with 5d721c76adbea0bf9b82890628cd0de84ee5820e287b1c62e134909c1bab1c73 not found: ID does not exist" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.613034 4865 scope.go:117] "RemoveContainer" containerID="d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661" Jan 03 04:29:17 crc kubenswrapper[4865]: E0103 04:29:17.613524 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661\": container with ID 
starting with d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661 not found: ID does not exist" containerID="d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.613577 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661"} err="failed to get container status \"d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661\": rpc error: code = NotFound desc = could not find container \"d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661\": container with ID starting with d39f557b9f11b9cc51a1fa323c8c5aebde067c9bb7b96c73120fcd8d74ea6661 not found: ID does not exist" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.613610 4865 scope.go:117] "RemoveContainer" containerID="c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b" Jan 03 04:29:17 crc kubenswrapper[4865]: E0103 04:29:17.613961 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b\": container with ID starting with c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b not found: ID does not exist" containerID="c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b" Jan 03 04:29:17 crc kubenswrapper[4865]: I0103 04:29:17.613990 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b"} err="failed to get container status \"c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b\": rpc error: code = NotFound desc = could not find container \"c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b\": container with ID starting with c0920d2bbd8e02639a681c9049802960c662430637c2cea2b9d54a1743a5e82b not found: 
ID does not exist" Jan 03 04:29:19 crc kubenswrapper[4865]: I0103 04:29:19.169126 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" path="/var/lib/kubelet/pods/7872b870-707b-4b22-84d1-e591c2c086d1/volumes" Jan 03 04:29:28 crc kubenswrapper[4865]: I0103 04:29:28.468138 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-9b69d945b-jqfc9" Jan 03 04:29:40 crc kubenswrapper[4865]: I0103 04:29:40.739796 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:29:40 crc kubenswrapper[4865]: I0103 04:29:40.740507 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:29:48 crc kubenswrapper[4865]: I0103 04:29:48.225576 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8c8d45d7-vlx4d" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.048326 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9d8bc"] Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.048544 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="extract-content" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.048554 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="extract-content" Jan 03 04:29:49 crc 
kubenswrapper[4865]: E0103 04:29:49.048577 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="extract-utilities" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.048583 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="extract-utilities" Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.048591 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="registry-server" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.048597 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="registry-server" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.048683 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7872b870-707b-4b22-84d1-e591c2c086d1" containerName="registry-server" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.050415 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.053550 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.054582 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.059821 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jszj8" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.061978 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j"] Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.062797 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.065158 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.076959 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j"] Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.133602 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jhgwc"] Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.134433 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.138963 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.139003 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-r7d2k" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.138965 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.139087 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.140410 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-g62v9"] Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.141223 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.142950 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.152196 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-g62v9"] Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168292 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-frr-conf\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168337 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-metrics\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168357 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-frr-sockets\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168441 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbsq\" (UniqueName: \"kubernetes.io/projected/78972b37-6300-455d-8b5d-7a2dbefa88f3-kube-api-access-2mbsq\") pod \"frr-k8s-webhook-server-7784b6fcf-88j8j\" (UID: \"78972b37-6300-455d-8b5d-7a2dbefa88f3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc 
kubenswrapper[4865]: I0103 04:29:49.168470 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78972b37-6300-455d-8b5d-7a2dbefa88f3-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-88j8j\" (UID: \"78972b37-6300-455d-8b5d-7a2dbefa88f3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168494 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4309435e-08d9-4379-895f-d297474cd646-frr-startup\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168512 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlxz\" (UniqueName: \"kubernetes.io/projected/4309435e-08d9-4379-895f-d297474cd646-kube-api-access-swlxz\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168531 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-reloader\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.168561 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4309435e-08d9-4379-895f-d297474cd646-metrics-certs\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270102 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f9c0210-2934-4cc6-aec8-f91055a4e30d-cert\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270159 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4309435e-08d9-4379-895f-d297474cd646-metrics-certs\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270180 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-frr-conf\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270213 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3ea65f95-887b-447b-b582-c1e91cdf44eb-metallb-excludel2\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270231 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-metrics-certs\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270330 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-metrics\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-frr-sockets\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270463 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbsq\" (UniqueName: \"kubernetes.io/projected/78972b37-6300-455d-8b5d-7a2dbefa88f3-kube-api-access-2mbsq\") pod \"frr-k8s-webhook-server-7784b6fcf-88j8j\" (UID: \"78972b37-6300-455d-8b5d-7a2dbefa88f3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270506 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78972b37-6300-455d-8b5d-7a2dbefa88f3-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-88j8j\" (UID: \"78972b37-6300-455d-8b5d-7a2dbefa88f3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270535 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270573 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthjz\" (UniqueName: 
\"kubernetes.io/projected/3ea65f95-887b-447b-b582-c1e91cdf44eb-kube-api-access-vthjz\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270592 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwbt\" (UniqueName: \"kubernetes.io/projected/6f9c0210-2934-4cc6-aec8-f91055a4e30d-kube-api-access-9cwbt\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270618 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4309435e-08d9-4379-895f-d297474cd646-frr-startup\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270634 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f9c0210-2934-4cc6-aec8-f91055a4e30d-metrics-certs\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270661 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swlxz\" (UniqueName: \"kubernetes.io/projected/4309435e-08d9-4379-895f-d297474cd646-kube-api-access-swlxz\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-reloader\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.270872 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-metrics\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.271008 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-reloader\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.271294 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-frr-conf\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.271357 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4309435e-08d9-4379-895f-d297474cd646-frr-sockets\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.271859 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4309435e-08d9-4379-895f-d297474cd646-frr-startup\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.278056 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78972b37-6300-455d-8b5d-7a2dbefa88f3-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-88j8j\" (UID: \"78972b37-6300-455d-8b5d-7a2dbefa88f3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.283172 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4309435e-08d9-4379-895f-d297474cd646-metrics-certs\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.285622 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbsq\" (UniqueName: \"kubernetes.io/projected/78972b37-6300-455d-8b5d-7a2dbefa88f3-kube-api-access-2mbsq\") pod \"frr-k8s-webhook-server-7784b6fcf-88j8j\" (UID: \"78972b37-6300-455d-8b5d-7a2dbefa88f3\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.293315 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swlxz\" (UniqueName: \"kubernetes.io/projected/4309435e-08d9-4379-895f-d297474cd646-kube-api-access-swlxz\") pod \"frr-k8s-9d8bc\" (UID: \"4309435e-08d9-4379-895f-d297474cd646\") " pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.365345 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.371859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthjz\" (UniqueName: \"kubernetes.io/projected/3ea65f95-887b-447b-b582-c1e91cdf44eb-kube-api-access-vthjz\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.371909 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cwbt\" (UniqueName: \"kubernetes.io/projected/6f9c0210-2934-4cc6-aec8-f91055a4e30d-kube-api-access-9cwbt\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.371931 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f9c0210-2934-4cc6-aec8-f91055a4e30d-metrics-certs\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.371971 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f9c0210-2934-4cc6-aec8-f91055a4e30d-cert\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.372001 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3ea65f95-887b-447b-b582-c1e91cdf44eb-metallb-excludel2\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc 
kubenswrapper[4865]: I0103 04:29:49.372017 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-metrics-certs\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.372051 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.372121 4865 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.372158 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist podName:3ea65f95-887b-447b-b582-c1e91cdf44eb nodeName:}" failed. No retries permitted until 2026-01-03 04:29:49.872143639 +0000 UTC m=+816.989196824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist") pod "speaker-jhgwc" (UID: "3ea65f95-887b-447b-b582-c1e91cdf44eb") : secret "metallb-memberlist" not found Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.372352 4865 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.372499 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-metrics-certs podName:3ea65f95-887b-447b-b582-c1e91cdf44eb nodeName:}" failed. 
No retries permitted until 2026-01-03 04:29:49.872482119 +0000 UTC m=+816.989535304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-metrics-certs") pod "speaker-jhgwc" (UID: "3ea65f95-887b-447b-b582-c1e91cdf44eb") : secret "speaker-certs-secret" not found Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.373084 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3ea65f95-887b-447b-b582-c1e91cdf44eb-metallb-excludel2\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.375866 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f9c0210-2934-4cc6-aec8-f91055a4e30d-metrics-certs\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.375921 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.376217 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.389686 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f9c0210-2934-4cc6-aec8-f91055a4e30d-cert\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.389913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cwbt\" (UniqueName: \"kubernetes.io/projected/6f9c0210-2934-4cc6-aec8-f91055a4e30d-kube-api-access-9cwbt\") pod \"controller-5bddd4b946-g62v9\" (UID: \"6f9c0210-2934-4cc6-aec8-f91055a4e30d\") " pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.399217 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthjz\" (UniqueName: \"kubernetes.io/projected/3ea65f95-887b-447b-b582-c1e91cdf44eb-kube-api-access-vthjz\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.468921 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.728790 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-g62v9"] Jan 03 04:29:49 crc kubenswrapper[4865]: W0103 04:29:49.732789 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f9c0210_2934_4cc6_aec8_f91055a4e30d.slice/crio-7759006771d0ee1c00a0628660546df7d0d62da59ad1b90bc7013fe4ff7f3fe8 WatchSource:0}: Error finding container 7759006771d0ee1c00a0628660546df7d0d62da59ad1b90bc7013fe4ff7f3fe8: Status 404 returned error can't find the container with id 7759006771d0ee1c00a0628660546df7d0d62da59ad1b90bc7013fe4ff7f3fe8 Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.753096 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-g62v9" event={"ID":"6f9c0210-2934-4cc6-aec8-f91055a4e30d","Type":"ContainerStarted","Data":"7759006771d0ee1c00a0628660546df7d0d62da59ad1b90bc7013fe4ff7f3fe8"} Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.862967 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j"] Jan 03 04:29:49 crc kubenswrapper[4865]: W0103 04:29:49.868827 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78972b37_6300_455d_8b5d_7a2dbefa88f3.slice/crio-d7c0fb6afb4671c6bc2474c827b8e0198bdb12ff8f6db4af11328acda78a6e48 WatchSource:0}: Error finding container d7c0fb6afb4671c6bc2474c827b8e0198bdb12ff8f6db4af11328acda78a6e48: Status 404 returned error can't find the container with id d7c0fb6afb4671c6bc2474c827b8e0198bdb12ff8f6db4af11328acda78a6e48 Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.881739 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-metrics-certs\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.882189 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.882458 4865 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 03 04:29:49 crc kubenswrapper[4865]: E0103 04:29:49.882551 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist podName:3ea65f95-887b-447b-b582-c1e91cdf44eb nodeName:}" failed. No retries permitted until 2026-01-03 04:29:50.882502267 +0000 UTC m=+817.999555472 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist") pod "speaker-jhgwc" (UID: "3ea65f95-887b-447b-b582-c1e91cdf44eb") : secret "metallb-memberlist" not found Jan 03 04:29:49 crc kubenswrapper[4865]: I0103 04:29:49.894971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-metrics-certs\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.769479 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-g62v9" event={"ID":"6f9c0210-2934-4cc6-aec8-f91055a4e30d","Type":"ContainerStarted","Data":"bff0e0a08ad69208c74951b691344912bde4908c88fe66df016420e4d7b04d0d"} Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.769570 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-g62v9" event={"ID":"6f9c0210-2934-4cc6-aec8-f91055a4e30d","Type":"ContainerStarted","Data":"c1274ad2d32e6e4bbbeb98468f2fa57d2b0374f88012fb5e7f1b3b6cc24b2577"} Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.769709 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.771534 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerStarted","Data":"cccadbdb6107abfaa360109ce9a1d8349c2fe826d1131a7c6af1988d66960d2e"} Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.776765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" 
event={"ID":"78972b37-6300-455d-8b5d-7a2dbefa88f3","Type":"ContainerStarted","Data":"d7c0fb6afb4671c6bc2474c827b8e0198bdb12ff8f6db4af11328acda78a6e48"} Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.802540 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-g62v9" podStartSLOduration=1.802461037 podStartE2EDuration="1.802461037s" podCreationTimestamp="2026-01-03 04:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:29:50.801404138 +0000 UTC m=+817.918457323" watchObservedRunningTime="2026-01-03 04:29:50.802461037 +0000 UTC m=+817.919514272" Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.896863 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.918026 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3ea65f95-887b-447b-b582-c1e91cdf44eb-memberlist\") pod \"speaker-jhgwc\" (UID: \"3ea65f95-887b-447b-b582-c1e91cdf44eb\") " pod="metallb-system/speaker-jhgwc" Jan 03 04:29:50 crc kubenswrapper[4865]: I0103 04:29:50.955100 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jhgwc" Jan 03 04:29:51 crc kubenswrapper[4865]: I0103 04:29:51.784483 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhgwc" event={"ID":"3ea65f95-887b-447b-b582-c1e91cdf44eb","Type":"ContainerStarted","Data":"31a088ab755d777c0d03ad820ae8b093ac2ba8d0ae886cadc2879eb95861061b"} Jan 03 04:29:51 crc kubenswrapper[4865]: I0103 04:29:51.784765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhgwc" event={"ID":"3ea65f95-887b-447b-b582-c1e91cdf44eb","Type":"ContainerStarted","Data":"5df1f66e59e0932d40c573dede0a17bc772ecd07f79c29857cbe4f52d609e794"} Jan 03 04:29:51 crc kubenswrapper[4865]: I0103 04:29:51.784776 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhgwc" event={"ID":"3ea65f95-887b-447b-b582-c1e91cdf44eb","Type":"ContainerStarted","Data":"f33aaeb703baa6779ed85860aba05b8e4d3b72c7f4681cb007492240e586ccb7"} Jan 03 04:29:51 crc kubenswrapper[4865]: I0103 04:29:51.784964 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jhgwc" Jan 03 04:29:51 crc kubenswrapper[4865]: I0103 04:29:51.799941 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jhgwc" podStartSLOduration=2.7999214 podStartE2EDuration="2.7999214s" podCreationTimestamp="2026-01-03 04:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:29:51.797735681 +0000 UTC m=+818.914788866" watchObservedRunningTime="2026-01-03 04:29:51.7999214 +0000 UTC m=+818.916974585" Jan 03 04:29:56 crc kubenswrapper[4865]: I0103 04:29:56.829061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" 
event={"ID":"78972b37-6300-455d-8b5d-7a2dbefa88f3","Type":"ContainerStarted","Data":"404f350631931ae7c1364745adad2a0b476c1366bc6a947afd7a68433edbb45b"} Jan 03 04:29:56 crc kubenswrapper[4865]: I0103 04:29:56.829995 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:29:56 crc kubenswrapper[4865]: I0103 04:29:56.831503 4865 generic.go:334] "Generic (PLEG): container finished" podID="4309435e-08d9-4379-895f-d297474cd646" containerID="edce4cd01ad5a8efc2eaffc79a88904adff8cf4149a039fec93ff4dbb5e08dbb" exitCode=0 Jan 03 04:29:56 crc kubenswrapper[4865]: I0103 04:29:56.831565 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerDied","Data":"edce4cd01ad5a8efc2eaffc79a88904adff8cf4149a039fec93ff4dbb5e08dbb"} Jan 03 04:29:56 crc kubenswrapper[4865]: I0103 04:29:56.856728 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" podStartSLOduration=1.169813102 podStartE2EDuration="7.856696173s" podCreationTimestamp="2026-01-03 04:29:49 +0000 UTC" firstStartedPulling="2026-01-03 04:29:49.872354911 +0000 UTC m=+816.989408096" lastFinishedPulling="2026-01-03 04:29:56.559237942 +0000 UTC m=+823.676291167" observedRunningTime="2026-01-03 04:29:56.855037498 +0000 UTC m=+823.972090713" watchObservedRunningTime="2026-01-03 04:29:56.856696173 +0000 UTC m=+823.973749408" Jan 03 04:29:58 crc kubenswrapper[4865]: I0103 04:29:58.849787 4865 generic.go:334] "Generic (PLEG): container finished" podID="4309435e-08d9-4379-895f-d297474cd646" containerID="b2055e8edaa41bfd0244143ef771bcc3fb35117546c8ba880975280f1bf04a7e" exitCode=0 Jan 03 04:29:58 crc kubenswrapper[4865]: I0103 04:29:58.849853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" 
event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerDied","Data":"b2055e8edaa41bfd0244143ef771bcc3fb35117546c8ba880975280f1bf04a7e"} Jan 03 04:29:59 crc kubenswrapper[4865]: I0103 04:29:59.860803 4865 generic.go:334] "Generic (PLEG): container finished" podID="4309435e-08d9-4379-895f-d297474cd646" containerID="dd016ef6c7707ea9feaeddab60db4a5ff96d4326a0bab6db6bcf71801ae755de" exitCode=0 Jan 03 04:29:59 crc kubenswrapper[4865]: I0103 04:29:59.860870 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerDied","Data":"dd016ef6c7707ea9feaeddab60db4a5ff96d4326a0bab6db6bcf71801ae755de"} Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.164263 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n"] Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.165689 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.167949 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.168147 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.178654 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n"] Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.221204 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6921c8-ce94-444d-9597-6f73851c6c95-secret-volume\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.221430 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6921c8-ce94-444d-9597-6f73851c6c95-config-volume\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.221476 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x4m6\" (UniqueName: \"kubernetes.io/projected/8d6921c8-ce94-444d-9597-6f73851c6c95-kube-api-access-8x4m6\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.332878 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6921c8-ce94-444d-9597-6f73851c6c95-config-volume\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.332930 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x4m6\" (UniqueName: \"kubernetes.io/projected/8d6921c8-ce94-444d-9597-6f73851c6c95-kube-api-access-8x4m6\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.333007 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6921c8-ce94-444d-9597-6f73851c6c95-secret-volume\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.336805 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6921c8-ce94-444d-9597-6f73851c6c95-config-volume\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.350705 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8d6921c8-ce94-444d-9597-6f73851c6c95-secret-volume\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.371632 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x4m6\" (UniqueName: \"kubernetes.io/projected/8d6921c8-ce94-444d-9597-6f73851c6c95-kube-api-access-8x4m6\") pod \"collect-profiles-29456910-lzj6n\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.512608 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.871947 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerStarted","Data":"590f89afc910ffafa86412d27062704b202a2f41d22862425ef2a01334137ced"} Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.872303 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerStarted","Data":"54f169f4959405acf3c2bc8cc75ebb1f06f17fef7233b2221e6d235e76b83dcd"} Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.872317 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerStarted","Data":"c0bbeb749753bca94348ffaa0f26df990f77bd452eef4aa0455b6ff96b09e089"} Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.872329 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" 
event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerStarted","Data":"307fab08a4f56bb5f85ae9aff19e6a7f680b9a32b28547353bd3010591022a18"} Jan 03 04:30:00 crc kubenswrapper[4865]: I0103 04:30:00.938075 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n"] Jan 03 04:30:00 crc kubenswrapper[4865]: W0103 04:30:00.948296 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d6921c8_ce94_444d_9597_6f73851c6c95.slice/crio-d1e4c480d127358e40c4ff9750ad238e5a3adf497185de453ec4c82011cb5c5b WatchSource:0}: Error finding container d1e4c480d127358e40c4ff9750ad238e5a3adf497185de453ec4c82011cb5c5b: Status 404 returned error can't find the container with id d1e4c480d127358e40c4ff9750ad238e5a3adf497185de453ec4c82011cb5c5b Jan 03 04:30:01 crc kubenswrapper[4865]: I0103 04:30:01.881530 4865 generic.go:334] "Generic (PLEG): container finished" podID="8d6921c8-ce94-444d-9597-6f73851c6c95" containerID="d44ab7d2fc18e12f3dfa8fc117444e02d3c2a68d0eb4ac4db8f5a440b2078931" exitCode=0 Jan 03 04:30:01 crc kubenswrapper[4865]: I0103 04:30:01.881615 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" event={"ID":"8d6921c8-ce94-444d-9597-6f73851c6c95","Type":"ContainerDied","Data":"d44ab7d2fc18e12f3dfa8fc117444e02d3c2a68d0eb4ac4db8f5a440b2078931"} Jan 03 04:30:01 crc kubenswrapper[4865]: I0103 04:30:01.881956 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" event={"ID":"8d6921c8-ce94-444d-9597-6f73851c6c95","Type":"ContainerStarted","Data":"d1e4c480d127358e40c4ff9750ad238e5a3adf497185de453ec4c82011cb5c5b"} Jan 03 04:30:01 crc kubenswrapper[4865]: I0103 04:30:01.890269 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" 
event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerStarted","Data":"89312fb8496129b73fd7772cf81b173b2d39868bbe74e6e66b55c47cab7005bf"} Jan 03 04:30:01 crc kubenswrapper[4865]: I0103 04:30:01.890325 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9d8bc" event={"ID":"4309435e-08d9-4379-895f-d297474cd646","Type":"ContainerStarted","Data":"e117210b10917354d5f4e14c55323c546aedcdd0602b9caed9dc2c32c949eab6"} Jan 03 04:30:01 crc kubenswrapper[4865]: I0103 04:30:01.890998 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:30:01 crc kubenswrapper[4865]: I0103 04:30:01.937525 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9d8bc" podStartSLOduration=6.383746928 podStartE2EDuration="12.937503807s" podCreationTimestamp="2026-01-03 04:29:49 +0000 UTC" firstStartedPulling="2026-01-03 04:29:50.00722033 +0000 UTC m=+817.124273515" lastFinishedPulling="2026-01-03 04:29:56.560977169 +0000 UTC m=+823.678030394" observedRunningTime="2026-01-03 04:30:01.933114188 +0000 UTC m=+829.050167383" watchObservedRunningTime="2026-01-03 04:30:01.937503807 +0000 UTC m=+829.054557002" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.209880 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.275117 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6921c8-ce94-444d-9597-6f73851c6c95-config-volume\") pod \"8d6921c8-ce94-444d-9597-6f73851c6c95\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.275239 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6921c8-ce94-444d-9597-6f73851c6c95-secret-volume\") pod \"8d6921c8-ce94-444d-9597-6f73851c6c95\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.275282 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x4m6\" (UniqueName: \"kubernetes.io/projected/8d6921c8-ce94-444d-9597-6f73851c6c95-kube-api-access-8x4m6\") pod \"8d6921c8-ce94-444d-9597-6f73851c6c95\" (UID: \"8d6921c8-ce94-444d-9597-6f73851c6c95\") " Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.275931 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6921c8-ce94-444d-9597-6f73851c6c95-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d6921c8-ce94-444d-9597-6f73851c6c95" (UID: "8d6921c8-ce94-444d-9597-6f73851c6c95"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.280642 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6921c8-ce94-444d-9597-6f73851c6c95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d6921c8-ce94-444d-9597-6f73851c6c95" (UID: "8d6921c8-ce94-444d-9597-6f73851c6c95"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.280907 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6921c8-ce94-444d-9597-6f73851c6c95-kube-api-access-8x4m6" (OuterVolumeSpecName: "kube-api-access-8x4m6") pod "8d6921c8-ce94-444d-9597-6f73851c6c95" (UID: "8d6921c8-ce94-444d-9597-6f73851c6c95"). InnerVolumeSpecName "kube-api-access-8x4m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.377131 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6921c8-ce94-444d-9597-6f73851c6c95-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.377173 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x4m6\" (UniqueName: \"kubernetes.io/projected/8d6921c8-ce94-444d-9597-6f73851c6c95-kube-api-access-8x4m6\") on node \"crc\" DevicePath \"\"" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.377184 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6921c8-ce94-444d-9597-6f73851c6c95-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.905786 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" event={"ID":"8d6921c8-ce94-444d-9597-6f73851c6c95","Type":"ContainerDied","Data":"d1e4c480d127358e40c4ff9750ad238e5a3adf497185de453ec4c82011cb5c5b"} Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.905820 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e4c480d127358e40c4ff9750ad238e5a3adf497185de453ec4c82011cb5c5b" Jan 03 04:30:03 crc kubenswrapper[4865]: I0103 04:30:03.905862 4865 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n" Jan 03 04:30:04 crc kubenswrapper[4865]: I0103 04:30:04.366145 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:30:04 crc kubenswrapper[4865]: I0103 04:30:04.415496 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:30:09 crc kubenswrapper[4865]: I0103 04:30:09.389009 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-88j8j" Jan 03 04:30:09 crc kubenswrapper[4865]: I0103 04:30:09.476220 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-g62v9" Jan 03 04:30:10 crc kubenswrapper[4865]: I0103 04:30:10.739433 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:30:10 crc kubenswrapper[4865]: I0103 04:30:10.740431 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:30:10 crc kubenswrapper[4865]: I0103 04:30:10.740696 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:30:10 crc kubenswrapper[4865]: I0103 04:30:10.741689 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c20a0e7659d3d063fdf492b3db209de2d28bdf1740f3632846fd9860f5536eb8"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:30:10 crc kubenswrapper[4865]: I0103 04:30:10.741944 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://c20a0e7659d3d063fdf492b3db209de2d28bdf1740f3632846fd9860f5536eb8" gracePeriod=600 Jan 03 04:30:10 crc kubenswrapper[4865]: I0103 04:30:10.959657 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jhgwc" Jan 03 04:30:11 crc kubenswrapper[4865]: I0103 04:30:11.969443 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="c20a0e7659d3d063fdf492b3db209de2d28bdf1740f3632846fd9860f5536eb8" exitCode=0 Jan 03 04:30:11 crc kubenswrapper[4865]: I0103 04:30:11.969548 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"c20a0e7659d3d063fdf492b3db209de2d28bdf1740f3632846fd9860f5536eb8"} Jan 03 04:30:11 crc kubenswrapper[4865]: I0103 04:30:11.969713 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"443ad2de9df44972affb457676a84e83fbdcce8153e921cc5ed8476d8a4f6591"} Jan 03 04:30:11 crc kubenswrapper[4865]: I0103 04:30:11.969732 4865 scope.go:117] "RemoveContainer" containerID="60deac40539593b8b14a3c569523707488c15ac6fa42425acabd24f1e426fa4c" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 
04:30:14.090813 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sh46x"] Jan 03 04:30:14 crc kubenswrapper[4865]: E0103 04:30:14.091686 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6921c8-ce94-444d-9597-6f73851c6c95" containerName="collect-profiles" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.091734 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6921c8-ce94-444d-9597-6f73851c6c95" containerName="collect-profiles" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.091991 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6921c8-ce94-444d-9597-6f73851c6c95" containerName="collect-profiles" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.092773 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sh46x" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.098211 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-p2twh" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.102809 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.103249 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.118865 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sh46x"] Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.135806 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvgtj\" (UniqueName: \"kubernetes.io/projected/d1b494b8-fa96-4a50-8e6f-487b89727dc3-kube-api-access-tvgtj\") pod \"openstack-operator-index-sh46x\" (UID: 
\"d1b494b8-fa96-4a50-8e6f-487b89727dc3\") " pod="openstack-operators/openstack-operator-index-sh46x" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.237356 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvgtj\" (UniqueName: \"kubernetes.io/projected/d1b494b8-fa96-4a50-8e6f-487b89727dc3-kube-api-access-tvgtj\") pod \"openstack-operator-index-sh46x\" (UID: \"d1b494b8-fa96-4a50-8e6f-487b89727dc3\") " pod="openstack-operators/openstack-operator-index-sh46x" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.256735 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvgtj\" (UniqueName: \"kubernetes.io/projected/d1b494b8-fa96-4a50-8e6f-487b89727dc3-kube-api-access-tvgtj\") pod \"openstack-operator-index-sh46x\" (UID: \"d1b494b8-fa96-4a50-8e6f-487b89727dc3\") " pod="openstack-operators/openstack-operator-index-sh46x" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.414661 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sh46x" Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.722868 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sh46x"] Jan 03 04:30:14 crc kubenswrapper[4865]: W0103 04:30:14.727614 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b494b8_fa96_4a50_8e6f_487b89727dc3.slice/crio-8e4244fcf6c187e7a558adf983348464109a115a82dd65f285493fd262f2b728 WatchSource:0}: Error finding container 8e4244fcf6c187e7a558adf983348464109a115a82dd65f285493fd262f2b728: Status 404 returned error can't find the container with id 8e4244fcf6c187e7a558adf983348464109a115a82dd65f285493fd262f2b728 Jan 03 04:30:14 crc kubenswrapper[4865]: I0103 04:30:14.999636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sh46x" event={"ID":"d1b494b8-fa96-4a50-8e6f-487b89727dc3","Type":"ContainerStarted","Data":"8e4244fcf6c187e7a558adf983348464109a115a82dd65f285493fd262f2b728"} Jan 03 04:30:17 crc kubenswrapper[4865]: I0103 04:30:17.021165 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sh46x" event={"ID":"d1b494b8-fa96-4a50-8e6f-487b89727dc3","Type":"ContainerStarted","Data":"c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099"} Jan 03 04:30:17 crc kubenswrapper[4865]: I0103 04:30:17.046243 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sh46x" podStartSLOduration=1.247948967 podStartE2EDuration="3.046196187s" podCreationTimestamp="2026-01-03 04:30:14 +0000 UTC" firstStartedPulling="2026-01-03 04:30:14.730773914 +0000 UTC m=+841.847827109" lastFinishedPulling="2026-01-03 04:30:16.529021104 +0000 UTC m=+843.646074329" observedRunningTime="2026-01-03 04:30:17.043952296 +0000 UTC 
m=+844.161005531" watchObservedRunningTime="2026-01-03 04:30:17.046196187 +0000 UTC m=+844.163249412" Jan 03 04:30:17 crc kubenswrapper[4865]: I0103 04:30:17.458565 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sh46x"] Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.063628 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-m27bl"] Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.064991 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.078573 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m27bl"] Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.194452 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtvmk\" (UniqueName: \"kubernetes.io/projected/d69f6b83-1ca1-48a6-b701-033533fe63d0-kube-api-access-gtvmk\") pod \"openstack-operator-index-m27bl\" (UID: \"d69f6b83-1ca1-48a6-b701-033533fe63d0\") " pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.296231 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtvmk\" (UniqueName: \"kubernetes.io/projected/d69f6b83-1ca1-48a6-b701-033533fe63d0-kube-api-access-gtvmk\") pod \"openstack-operator-index-m27bl\" (UID: \"d69f6b83-1ca1-48a6-b701-033533fe63d0\") " pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.328309 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtvmk\" (UniqueName: \"kubernetes.io/projected/d69f6b83-1ca1-48a6-b701-033533fe63d0-kube-api-access-gtvmk\") pod \"openstack-operator-index-m27bl\" (UID: 
\"d69f6b83-1ca1-48a6-b701-033533fe63d0\") " pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.403216 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:18 crc kubenswrapper[4865]: I0103 04:30:18.690924 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m27bl"] Jan 03 04:30:18 crc kubenswrapper[4865]: W0103 04:30:18.694830 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd69f6b83_1ca1_48a6_b701_033533fe63d0.slice/crio-58dc866484cf78dd3210d0d0babbee4aa60222f5a50de126083f8151e44988df WatchSource:0}: Error finding container 58dc866484cf78dd3210d0d0babbee4aa60222f5a50de126083f8151e44988df: Status 404 returned error can't find the container with id 58dc866484cf78dd3210d0d0babbee4aa60222f5a50de126083f8151e44988df Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.044474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m27bl" event={"ID":"d69f6b83-1ca1-48a6-b701-033533fe63d0","Type":"ContainerStarted","Data":"3b4d0a78180285dfc7ef1c73be28fa71a0052feb9b971b1f76fd1e997a2d6b1f"} Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.045061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m27bl" event={"ID":"d69f6b83-1ca1-48a6-b701-033533fe63d0","Type":"ContainerStarted","Data":"58dc866484cf78dd3210d0d0babbee4aa60222f5a50de126083f8151e44988df"} Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.044586 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sh46x" podUID="d1b494b8-fa96-4a50-8e6f-487b89727dc3" containerName="registry-server" 
containerID="cri-o://c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099" gracePeriod=2 Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.118534 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-m27bl" podStartSLOduration=0.945911261 podStartE2EDuration="1.118515654s" podCreationTimestamp="2026-01-03 04:30:18 +0000 UTC" firstStartedPulling="2026-01-03 04:30:18.701481638 +0000 UTC m=+845.818534853" lastFinishedPulling="2026-01-03 04:30:18.874086031 +0000 UTC m=+845.991139246" observedRunningTime="2026-01-03 04:30:19.115243634 +0000 UTC m=+846.232296829" watchObservedRunningTime="2026-01-03 04:30:19.118515654 +0000 UTC m=+846.235568849" Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.369538 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9d8bc" Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.493245 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sh46x" Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.620256 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvgtj\" (UniqueName: \"kubernetes.io/projected/d1b494b8-fa96-4a50-8e6f-487b89727dc3-kube-api-access-tvgtj\") pod \"d1b494b8-fa96-4a50-8e6f-487b89727dc3\" (UID: \"d1b494b8-fa96-4a50-8e6f-487b89727dc3\") " Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.629758 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b494b8-fa96-4a50-8e6f-487b89727dc3-kube-api-access-tvgtj" (OuterVolumeSpecName: "kube-api-access-tvgtj") pod "d1b494b8-fa96-4a50-8e6f-487b89727dc3" (UID: "d1b494b8-fa96-4a50-8e6f-487b89727dc3"). InnerVolumeSpecName "kube-api-access-tvgtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:30:19 crc kubenswrapper[4865]: I0103 04:30:19.722310 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvgtj\" (UniqueName: \"kubernetes.io/projected/d1b494b8-fa96-4a50-8e6f-487b89727dc3-kube-api-access-tvgtj\") on node \"crc\" DevicePath \"\"" Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.055551 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1b494b8-fa96-4a50-8e6f-487b89727dc3" containerID="c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099" exitCode=0 Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.055621 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sh46x" Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.056847 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sh46x" event={"ID":"d1b494b8-fa96-4a50-8e6f-487b89727dc3","Type":"ContainerDied","Data":"c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099"} Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.056961 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sh46x" event={"ID":"d1b494b8-fa96-4a50-8e6f-487b89727dc3","Type":"ContainerDied","Data":"8e4244fcf6c187e7a558adf983348464109a115a82dd65f285493fd262f2b728"} Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.056994 4865 scope.go:117] "RemoveContainer" containerID="c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099" Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.078471 4865 scope.go:117] "RemoveContainer" containerID="c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099" Jan 03 04:30:20 crc kubenswrapper[4865]: E0103 04:30:20.079142 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099\": container with ID starting with c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099 not found: ID does not exist" containerID="c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099" Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.079174 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099"} err="failed to get container status \"c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099\": rpc error: code = NotFound desc = could not find container \"c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099\": container with ID starting with c6aad9cab39cb35d4d8abd72d3b16a43086858527cb26daa35b2ff56387f6099 not found: ID does not exist" Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.089857 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sh46x"] Jan 03 04:30:20 crc kubenswrapper[4865]: I0103 04:30:20.102593 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-sh46x"] Jan 03 04:30:21 crc kubenswrapper[4865]: I0103 04:30:21.164854 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b494b8-fa96-4a50-8e6f-487b89727dc3" path="/var/lib/kubelet/pods/d1b494b8-fa96-4a50-8e6f-487b89727dc3/volumes" Jan 03 04:30:28 crc kubenswrapper[4865]: I0103 04:30:28.403518 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:28 crc kubenswrapper[4865]: I0103 04:30:28.405666 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:28 crc kubenswrapper[4865]: I0103 04:30:28.448372 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:29 crc kubenswrapper[4865]: I0103 04:30:29.169533 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-m27bl" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.118899 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn"] Jan 03 04:30:30 crc kubenswrapper[4865]: E0103 04:30:30.119265 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b494b8-fa96-4a50-8e6f-487b89727dc3" containerName="registry-server" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.119286 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b494b8-fa96-4a50-8e6f-487b89727dc3" containerName="registry-server" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.119534 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b494b8-fa96-4a50-8e6f-487b89727dc3" containerName="registry-server" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.120921 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.127235 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bknfh" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.134695 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn"] Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.276061 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-util\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.276556 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-bundle\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.276837 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82bcp\" (UniqueName: \"kubernetes.io/projected/d63c1299-0c01-4495-bff5-70ea344821dd-kube-api-access-82bcp\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 
04:30:30.378194 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82bcp\" (UniqueName: \"kubernetes.io/projected/d63c1299-0c01-4495-bff5-70ea344821dd-kube-api-access-82bcp\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.378338 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-util\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.378434 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-bundle\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.378981 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-util\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.379198 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-bundle\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.414030 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82bcp\" (UniqueName: \"kubernetes.io/projected/d63c1299-0c01-4495-bff5-70ea344821dd-kube-api-access-82bcp\") pod \"142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.500371 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:30 crc kubenswrapper[4865]: I0103 04:30:30.761659 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn"] Jan 03 04:30:30 crc kubenswrapper[4865]: W0103 04:30:30.772872 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd63c1299_0c01_4495_bff5_70ea344821dd.slice/crio-49973872c74927d7d8179157ee2aeed8dc16a683e6950008c486885be443750c WatchSource:0}: Error finding container 49973872c74927d7d8179157ee2aeed8dc16a683e6950008c486885be443750c: Status 404 returned error can't find the container with id 49973872c74927d7d8179157ee2aeed8dc16a683e6950008c486885be443750c Jan 03 04:30:31 crc kubenswrapper[4865]: I0103 04:30:31.143877 4865 generic.go:334] "Generic (PLEG): container finished" podID="d63c1299-0c01-4495-bff5-70ea344821dd" containerID="92870ac0acc68a09577a98aa339c2ed731f8e367f0e9248cabe8a708841942d9" exitCode=0 Jan 03 
04:30:31 crc kubenswrapper[4865]: I0103 04:30:31.143998 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" event={"ID":"d63c1299-0c01-4495-bff5-70ea344821dd","Type":"ContainerDied","Data":"92870ac0acc68a09577a98aa339c2ed731f8e367f0e9248cabe8a708841942d9"} Jan 03 04:30:31 crc kubenswrapper[4865]: I0103 04:30:31.144354 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" event={"ID":"d63c1299-0c01-4495-bff5-70ea344821dd","Type":"ContainerStarted","Data":"49973872c74927d7d8179157ee2aeed8dc16a683e6950008c486885be443750c"} Jan 03 04:30:32 crc kubenswrapper[4865]: I0103 04:30:32.155520 4865 generic.go:334] "Generic (PLEG): container finished" podID="d63c1299-0c01-4495-bff5-70ea344821dd" containerID="c03911782c287292a1029fc45d14e583aa1cbeb1d645dbaf242b11466a019006" exitCode=0 Jan 03 04:30:32 crc kubenswrapper[4865]: I0103 04:30:32.155600 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" event={"ID":"d63c1299-0c01-4495-bff5-70ea344821dd","Type":"ContainerDied","Data":"c03911782c287292a1029fc45d14e583aa1cbeb1d645dbaf242b11466a019006"} Jan 03 04:30:33 crc kubenswrapper[4865]: I0103 04:30:33.169236 4865 generic.go:334] "Generic (PLEG): container finished" podID="d63c1299-0c01-4495-bff5-70ea344821dd" containerID="0ef61722bc6298cd3e31c542b1e52292543c09744905fc400f4c2df19df4f5ca" exitCode=0 Jan 03 04:30:33 crc kubenswrapper[4865]: I0103 04:30:33.194553 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" event={"ID":"d63c1299-0c01-4495-bff5-70ea344821dd","Type":"ContainerDied","Data":"0ef61722bc6298cd3e31c542b1e52292543c09744905fc400f4c2df19df4f5ca"} Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.563003 
4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.746573 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82bcp\" (UniqueName: \"kubernetes.io/projected/d63c1299-0c01-4495-bff5-70ea344821dd-kube-api-access-82bcp\") pod \"d63c1299-0c01-4495-bff5-70ea344821dd\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.746877 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-util\") pod \"d63c1299-0c01-4495-bff5-70ea344821dd\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.746992 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-bundle\") pod \"d63c1299-0c01-4495-bff5-70ea344821dd\" (UID: \"d63c1299-0c01-4495-bff5-70ea344821dd\") " Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.748163 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-bundle" (OuterVolumeSpecName: "bundle") pod "d63c1299-0c01-4495-bff5-70ea344821dd" (UID: "d63c1299-0c01-4495-bff5-70ea344821dd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.755873 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63c1299-0c01-4495-bff5-70ea344821dd-kube-api-access-82bcp" (OuterVolumeSpecName: "kube-api-access-82bcp") pod "d63c1299-0c01-4495-bff5-70ea344821dd" (UID: "d63c1299-0c01-4495-bff5-70ea344821dd"). 
InnerVolumeSpecName "kube-api-access-82bcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.782228 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-util" (OuterVolumeSpecName: "util") pod "d63c1299-0c01-4495-bff5-70ea344821dd" (UID: "d63c1299-0c01-4495-bff5-70ea344821dd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.850186 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-util\") on node \"crc\" DevicePath \"\"" Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.850284 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d63c1299-0c01-4495-bff5-70ea344821dd-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:30:34 crc kubenswrapper[4865]: I0103 04:30:34.850313 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82bcp\" (UniqueName: \"kubernetes.io/projected/d63c1299-0c01-4495-bff5-70ea344821dd-kube-api-access-82bcp\") on node \"crc\" DevicePath \"\"" Jan 03 04:30:35 crc kubenswrapper[4865]: I0103 04:30:35.190585 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" event={"ID":"d63c1299-0c01-4495-bff5-70ea344821dd","Type":"ContainerDied","Data":"49973872c74927d7d8179157ee2aeed8dc16a683e6950008c486885be443750c"} Jan 03 04:30:35 crc kubenswrapper[4865]: I0103 04:30:35.190648 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49973872c74927d7d8179157ee2aeed8dc16a683e6950008c486885be443750c" Jan 03 04:30:35 crc kubenswrapper[4865]: I0103 04:30:35.190674 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.308752 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh"] Jan 03 04:30:42 crc kubenswrapper[4865]: E0103 04:30:42.309645 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63c1299-0c01-4495-bff5-70ea344821dd" containerName="extract" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.309660 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63c1299-0c01-4495-bff5-70ea344821dd" containerName="extract" Jan 03 04:30:42 crc kubenswrapper[4865]: E0103 04:30:42.309674 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63c1299-0c01-4495-bff5-70ea344821dd" containerName="util" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.309683 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63c1299-0c01-4495-bff5-70ea344821dd" containerName="util" Jan 03 04:30:42 crc kubenswrapper[4865]: E0103 04:30:42.309701 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63c1299-0c01-4495-bff5-70ea344821dd" containerName="pull" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.309708 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63c1299-0c01-4495-bff5-70ea344821dd" containerName="pull" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.309839 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63c1299-0c01-4495-bff5-70ea344821dd" containerName="extract" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.310282 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.315515 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-44hfq" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.331821 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh"] Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.473203 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjch\" (UniqueName: \"kubernetes.io/projected/b69ab3cd-b729-4ca3-83c2-989f7660d826-kube-api-access-cnjch\") pod \"openstack-operator-controller-operator-5954d5f7bc-c9qjh\" (UID: \"b69ab3cd-b729-4ca3-83c2-989f7660d826\") " pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.574588 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjch\" (UniqueName: \"kubernetes.io/projected/b69ab3cd-b729-4ca3-83c2-989f7660d826-kube-api-access-cnjch\") pod \"openstack-operator-controller-operator-5954d5f7bc-c9qjh\" (UID: \"b69ab3cd-b729-4ca3-83c2-989f7660d826\") " pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.606871 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjch\" (UniqueName: \"kubernetes.io/projected/b69ab3cd-b729-4ca3-83c2-989f7660d826-kube-api-access-cnjch\") pod \"openstack-operator-controller-operator-5954d5f7bc-c9qjh\" (UID: \"b69ab3cd-b729-4ca3-83c2-989f7660d826\") " pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" Jan 03 04:30:42 crc kubenswrapper[4865]: I0103 04:30:42.629550 4865 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" Jan 03 04:30:43 crc kubenswrapper[4865]: I0103 04:30:43.126047 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh"] Jan 03 04:30:43 crc kubenswrapper[4865]: I0103 04:30:43.249603 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" event={"ID":"b69ab3cd-b729-4ca3-83c2-989f7660d826","Type":"ContainerStarted","Data":"f214443a1faebe0c33f4daec0a6bd47f55c3089ec08e1310b829ba1328315040"} Jan 03 04:30:47 crc kubenswrapper[4865]: I0103 04:30:47.275000 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" event={"ID":"b69ab3cd-b729-4ca3-83c2-989f7660d826","Type":"ContainerStarted","Data":"807a779e17a7ccfaa83f176c40620728dae0699d8d7feda1918fb8ce5e807473"} Jan 03 04:30:47 crc kubenswrapper[4865]: I0103 04:30:47.275544 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" Jan 03 04:30:47 crc kubenswrapper[4865]: I0103 04:30:47.311529 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" podStartSLOduration=1.628487825 podStartE2EDuration="5.311506164s" podCreationTimestamp="2026-01-03 04:30:42 +0000 UTC" firstStartedPulling="2026-01-03 04:30:43.138760218 +0000 UTC m=+870.255813433" lastFinishedPulling="2026-01-03 04:30:46.821778587 +0000 UTC m=+873.938831772" observedRunningTime="2026-01-03 04:30:47.306306123 +0000 UTC m=+874.423359338" watchObservedRunningTime="2026-01-03 04:30:47.311506164 +0000 UTC m=+874.428559369" Jan 03 04:30:52 crc kubenswrapper[4865]: I0103 04:30:52.633249 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5954d5f7bc-c9qjh" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.618829 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zwzgn"] Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.620571 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.635878 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zwzgn"] Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.760218 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-catalog-content\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.760263 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-utilities\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.760345 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4jd\" (UniqueName: \"kubernetes.io/projected/87340c91-82ae-4300-852c-fcc4ee3780f3-kube-api-access-6j4jd\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.861922 
4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-catalog-content\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.861982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-utilities\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.862069 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j4jd\" (UniqueName: \"kubernetes.io/projected/87340c91-82ae-4300-852c-fcc4ee3780f3-kube-api-access-6j4jd\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.862688 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-catalog-content\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.862791 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-utilities\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.878276 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6j4jd\" (UniqueName: \"kubernetes.io/projected/87340c91-82ae-4300-852c-fcc4ee3780f3-kube-api-access-6j4jd\") pod \"certified-operators-zwzgn\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:02 crc kubenswrapper[4865]: I0103 04:31:02.937074 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:03 crc kubenswrapper[4865]: I0103 04:31:03.243365 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zwzgn"] Jan 03 04:31:03 crc kubenswrapper[4865]: I0103 04:31:03.396083 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerStarted","Data":"c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85"} Jan 03 04:31:03 crc kubenswrapper[4865]: I0103 04:31:03.396376 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerStarted","Data":"7a4b79c45ea641ffc21e7fd82cc019440aca056fbf3b722a599d4034f42821c0"} Jan 03 04:31:04 crc kubenswrapper[4865]: I0103 04:31:04.404746 4865 generic.go:334] "Generic (PLEG): container finished" podID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerID="c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85" exitCode=0 Jan 03 04:31:04 crc kubenswrapper[4865]: I0103 04:31:04.404786 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerDied","Data":"c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85"} Jan 03 04:31:04 crc kubenswrapper[4865]: I0103 04:31:04.404812 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerStarted","Data":"f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664"} Jan 03 04:31:05 crc kubenswrapper[4865]: I0103 04:31:05.417429 4865 generic.go:334] "Generic (PLEG): container finished" podID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerID="f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664" exitCode=0 Jan 03 04:31:05 crc kubenswrapper[4865]: I0103 04:31:05.417527 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerDied","Data":"f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664"} Jan 03 04:31:06 crc kubenswrapper[4865]: I0103 04:31:06.424737 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerStarted","Data":"34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8"} Jan 03 04:31:06 crc kubenswrapper[4865]: I0103 04:31:06.453793 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zwzgn" podStartSLOduration=2.050446079 podStartE2EDuration="4.453777048s" podCreationTimestamp="2026-01-03 04:31:02 +0000 UTC" firstStartedPulling="2026-01-03 04:31:03.3973691 +0000 UTC m=+890.514422285" lastFinishedPulling="2026-01-03 04:31:05.800700029 +0000 UTC m=+892.917753254" observedRunningTime="2026-01-03 04:31:06.452325648 +0000 UTC m=+893.569378833" watchObservedRunningTime="2026-01-03 04:31:06.453777048 +0000 UTC m=+893.570830233" Jan 03 04:31:12 crc kubenswrapper[4865]: I0103 04:31:12.938303 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:12 crc kubenswrapper[4865]: I0103 
04:31:12.940263 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:12 crc kubenswrapper[4865]: I0103 04:31:12.980420 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.292878 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.293691 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.296730 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-26wxx" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.310675 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.311591 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.320315 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-njpmg" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.323784 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.324507 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.325816 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rk5wb" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.329510 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.332238 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.340225 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.340882 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.341607 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.342809 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-j6zsf" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.350867 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bkrgn" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.356159 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.376064 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psspt\" (UniqueName: \"kubernetes.io/projected/45a74d9c-8e20-4f90-ad8b-8e139ad592fd-kube-api-access-psspt\") pod \"glance-operator-controller-manager-7b549fc966-2lcpm\" (UID: \"45a74d9c-8e20-4f90-ad8b-8e139ad592fd\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.376155 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pq9s\" (UniqueName: \"kubernetes.io/projected/e4095c6a-c9c9-42c0-b79e-a4f467563d27-kube-api-access-4pq9s\") pod \"cinder-operator-controller-manager-78979fc445-pk2cc\" (UID: \"e4095c6a-c9c9-42c0-b79e-a4f467563d27\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.376208 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxpc5\" (UniqueName: \"kubernetes.io/projected/675aef60-25dd-4113-a4cf-2f9b91a21150-kube-api-access-mxpc5\") pod \"barbican-operator-controller-manager-f6f74d6db-8p5g8\" (UID: \"675aef60-25dd-4113-a4cf-2f9b91a21150\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.376242 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzg4g\" (UniqueName: \"kubernetes.io/projected/8ed18047-e002-419d-b950-2535d4d778c1-kube-api-access-nzg4g\") pod \"heat-operator-controller-manager-658dd65b86-xk5t7\" (UID: \"8ed18047-e002-419d-b950-2535d4d778c1\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.376269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtss\" (UniqueName: \"kubernetes.io/projected/13c73aaf-30a7-4530-afff-39ec069fccde-kube-api-access-tmtss\") pod \"designate-operator-controller-manager-66f8b87655-sxcmb\" (UID: \"13c73aaf-30a7-4530-afff-39ec069fccde\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.377798 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.378506 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.381470 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-kx9xh" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.392919 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-95md4"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.398279 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.402621 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-5mxms" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.411522 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.437859 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.460357 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-95md4"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.466666 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.485356 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzg4g\" (UniqueName: \"kubernetes.io/projected/8ed18047-e002-419d-b950-2535d4d778c1-kube-api-access-nzg4g\") pod 
\"heat-operator-controller-manager-658dd65b86-xk5t7\" (UID: \"8ed18047-e002-419d-b950-2535d4d778c1\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.485455 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtss\" (UniqueName: \"kubernetes.io/projected/13c73aaf-30a7-4530-afff-39ec069fccde-kube-api-access-tmtss\") pod \"designate-operator-controller-manager-66f8b87655-sxcmb\" (UID: \"13c73aaf-30a7-4530-afff-39ec069fccde\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.485541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psspt\" (UniqueName: \"kubernetes.io/projected/45a74d9c-8e20-4f90-ad8b-8e139ad592fd-kube-api-access-psspt\") pod \"glance-operator-controller-manager-7b549fc966-2lcpm\" (UID: \"45a74d9c-8e20-4f90-ad8b-8e139ad592fd\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.485630 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pq9s\" (UniqueName: \"kubernetes.io/projected/e4095c6a-c9c9-42c0-b79e-a4f467563d27-kube-api-access-4pq9s\") pod \"cinder-operator-controller-manager-78979fc445-pk2cc\" (UID: \"e4095c6a-c9c9-42c0-b79e-a4f467563d27\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.485719 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thk6q\" (UniqueName: \"kubernetes.io/projected/78091396-35cf-4a65-878b-02705fd27e09-kube-api-access-thk6q\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-wlwkg\" (UID: \"78091396-35cf-4a65-878b-02705fd27e09\") " 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.485763 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxpc5\" (UniqueName: \"kubernetes.io/projected/675aef60-25dd-4113-a4cf-2f9b91a21150-kube-api-access-mxpc5\") pod \"barbican-operator-controller-manager-f6f74d6db-8p5g8\" (UID: \"675aef60-25dd-4113-a4cf-2f9b91a21150\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.509346 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-d6psw"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.513124 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.546253 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mq6sc" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.548581 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psspt\" (UniqueName: \"kubernetes.io/projected/45a74d9c-8e20-4f90-ad8b-8e139ad592fd-kube-api-access-psspt\") pod \"glance-operator-controller-manager-7b549fc966-2lcpm\" (UID: \"45a74d9c-8e20-4f90-ad8b-8e139ad592fd\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.556953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzg4g\" (UniqueName: \"kubernetes.io/projected/8ed18047-e002-419d-b950-2535d4d778c1-kube-api-access-nzg4g\") pod \"heat-operator-controller-manager-658dd65b86-xk5t7\" (UID: \"8ed18047-e002-419d-b950-2535d4d778c1\") " 
pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.557492 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxpc5\" (UniqueName: \"kubernetes.io/projected/675aef60-25dd-4113-a4cf-2f9b91a21150-kube-api-access-mxpc5\") pod \"barbican-operator-controller-manager-f6f74d6db-8p5g8\" (UID: \"675aef60-25dd-4113-a4cf-2f9b91a21150\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.565048 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pq9s\" (UniqueName: \"kubernetes.io/projected/e4095c6a-c9c9-42c0-b79e-a4f467563d27-kube-api-access-4pq9s\") pod \"cinder-operator-controller-manager-78979fc445-pk2cc\" (UID: \"e4095c6a-c9c9-42c0-b79e-a4f467563d27\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.578758 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.579549 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.582283 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtss\" (UniqueName: \"kubernetes.io/projected/13c73aaf-30a7-4530-afff-39ec069fccde-kube-api-access-tmtss\") pod \"designate-operator-controller-manager-66f8b87655-sxcmb\" (UID: \"13c73aaf-30a7-4530-afff-39ec069fccde\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.591500 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7fgm\" (UniqueName: \"kubernetes.io/projected/31214035-ff7b-4c07-87b8-52a98b09cd52-kube-api-access-w7fgm\") pod \"manila-operator-controller-manager-598945d5b8-95md4\" (UID: \"31214035-ff7b-4c07-87b8-52a98b09cd52\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.591593 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thk6q\" (UniqueName: \"kubernetes.io/projected/78091396-35cf-4a65-878b-02705fd27e09-kube-api-access-thk6q\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-wlwkg\" (UID: \"78091396-35cf-4a65-878b-02705fd27e09\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.601946 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vx6wv" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.602120 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.610818 4865 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.611788 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.615986 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-p4n5w" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.631620 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.636743 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.640637 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thk6q\" (UniqueName: \"kubernetes.io/projected/78091396-35cf-4a65-878b-02705fd27e09-kube-api-access-thk6q\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-wlwkg\" (UID: \"78091396-35cf-4a65-878b-02705fd27e09\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.644122 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.651799 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.655811 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hw59h" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.665977 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.666701 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.674611 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.675130 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6dzlh" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.685630 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.693926 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cbdj\" (UniqueName: \"kubernetes.io/projected/0adfe2b3-9c76-4213-a856-e834ff2b24e0-kube-api-access-2cbdj\") pod \"keystone-operator-controller-manager-568985c78-d6psw\" (UID: \"0adfe2b3-9c76-4213-a856-e834ff2b24e0\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.693975 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.694012 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7fgm\" (UniqueName: \"kubernetes.io/projected/31214035-ff7b-4c07-87b8-52a98b09cd52-kube-api-access-w7fgm\") pod \"manila-operator-controller-manager-598945d5b8-95md4\" (UID: \"31214035-ff7b-4c07-87b8-52a98b09cd52\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.694034 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbw7b\" (UniqueName: \"kubernetes.io/projected/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-kube-api-access-tbw7b\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:13 crc kubenswrapper[4865]: 
I0103 04:31:13.699723 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.700902 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.708575 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.709514 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.712444 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ddvs7" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.721840 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-h84nl" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.733026 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7fgm\" (UniqueName: \"kubernetes.io/projected/31214035-ff7b-4c07-87b8-52a98b09cd52-kube-api-access-w7fgm\") pod \"manila-operator-controller-manager-598945d5b8-95md4\" (UID: \"31214035-ff7b-4c07-87b8-52a98b09cd52\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.739238 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.739732 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.740065 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.743674 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-86rv2" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.752712 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.783979 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.796922 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jkl\" (UniqueName: \"kubernetes.io/projected/dc3a99b3-36d9-41bc-94f7-74b47980f602-kube-api-access-62jkl\") pod \"neutron-operator-controller-manager-7cd87b778f-dt8d6\" (UID: \"dc3a99b3-36d9-41bc-94f7-74b47980f602\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.796965 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbw7b\" (UniqueName: \"kubernetes.io/projected/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-kube-api-access-tbw7b\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.797007 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwxbh\" (UniqueName: \"kubernetes.io/projected/c4804e80-40f4-4f53-abfb-cafc1299f889-kube-api-access-zwxbh\") pod \"mariadb-operator-controller-manager-7b88bfc995-vp89g\" (UID: \"c4804e80-40f4-4f53-abfb-cafc1299f889\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.797031 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9czn\" (UniqueName: \"kubernetes.io/projected/76d489d2-17da-4af8-8fc5-d8ce6451a45c-kube-api-access-l9czn\") pod \"nova-operator-controller-manager-5fbbf8b6cc-fzx6g\" (UID: \"76d489d2-17da-4af8-8fc5-d8ce6451a45c\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.797065 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cbdj\" (UniqueName: \"kubernetes.io/projected/0adfe2b3-9c76-4213-a856-e834ff2b24e0-kube-api-access-2cbdj\") pod \"keystone-operator-controller-manager-568985c78-d6psw\" (UID: \"0adfe2b3-9c76-4213-a856-e834ff2b24e0\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.797090 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jjw8\" (UniqueName: \"kubernetes.io/projected/1e7c8270-346b-429d-a775-abb648245a40-kube-api-access-2jjw8\") pod \"ironic-operator-controller-manager-f99f54bc8-2cfvd\" (UID: \"1e7c8270-346b-429d-a775-abb648245a40\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.797110 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:13 crc kubenswrapper[4865]: E0103 04:31:13.797217 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:13 crc kubenswrapper[4865]: E0103 04:31:13.797262 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert podName:ce7f03b5-6280-4cd8-b3a6-865329b1b9ce nodeName:}" failed. No retries permitted until 2026-01-03 04:31:14.297246634 +0000 UTC m=+901.414299819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert") pod "infra-operator-controller-manager-648996cf74-xqj6p" (UID: "ce7f03b5-6280-4cd8-b3a6-865329b1b9ce") : secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.823240 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-d6psw"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.826017 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cbdj\" (UniqueName: \"kubernetes.io/projected/0adfe2b3-9c76-4213-a856-e834ff2b24e0-kube-api-access-2cbdj\") pod \"keystone-operator-controller-manager-568985c78-d6psw\" (UID: \"0adfe2b3-9c76-4213-a856-e834ff2b24e0\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.829300 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbw7b\" (UniqueName: 
\"kubernetes.io/projected/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-kube-api-access-tbw7b\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.834758 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.845310 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.846148 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.850494 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rd6xp" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.850796 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.852776 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.857969 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.860893 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.866614 4865 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.867568 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.870578 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.871094 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ggn6s" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.874501 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.879592 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.885799 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.890224 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.898364 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.898482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62jkl\" (UniqueName: \"kubernetes.io/projected/dc3a99b3-36d9-41bc-94f7-74b47980f602-kube-api-access-62jkl\") pod 
\"neutron-operator-controller-manager-7cd87b778f-dt8d6\" (UID: \"dc3a99b3-36d9-41bc-94f7-74b47980f602\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.898534 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpb5b\" (UniqueName: \"kubernetes.io/projected/a179b7cc-8be0-4956-83ad-7b8b8087103b-kube-api-access-gpb5b\") pod \"ovn-operator-controller-manager-bf6d4f946-gsncl\" (UID: \"a179b7cc-8be0-4956-83ad-7b8b8087103b\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.898577 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwxbh\" (UniqueName: \"kubernetes.io/projected/c4804e80-40f4-4f53-abfb-cafc1299f889-kube-api-access-zwxbh\") pod \"mariadb-operator-controller-manager-7b88bfc995-vp89g\" (UID: \"c4804e80-40f4-4f53-abfb-cafc1299f889\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.898607 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9czn\" (UniqueName: \"kubernetes.io/projected/76d489d2-17da-4af8-8fc5-d8ce6451a45c-kube-api-access-l9czn\") pod \"nova-operator-controller-manager-5fbbf8b6cc-fzx6g\" (UID: \"76d489d2-17da-4af8-8fc5-d8ce6451a45c\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.898635 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42v9\" (UniqueName: \"kubernetes.io/projected/1785ab82-9c1c-41c0-aa07-0285dd49b221-kube-api-access-n42v9\") pod \"octavia-operator-controller-manager-68c649d9d-8jc6g\" (UID: \"1785ab82-9c1c-41c0-aa07-0285dd49b221\") " 
pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.898674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jjw8\" (UniqueName: \"kubernetes.io/projected/1e7c8270-346b-429d-a775-abb648245a40-kube-api-access-2jjw8\") pod \"ironic-operator-controller-manager-f99f54bc8-2cfvd\" (UID: \"1e7c8270-346b-429d-a775-abb648245a40\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.899178 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.904664 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tg9cs" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.909559 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.910518 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.914872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.915179 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-lnzgf" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.917884 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwxbh\" (UniqueName: \"kubernetes.io/projected/c4804e80-40f4-4f53-abfb-cafc1299f889-kube-api-access-zwxbh\") pod \"mariadb-operator-controller-manager-7b88bfc995-vp89g\" (UID: \"c4804e80-40f4-4f53-abfb-cafc1299f889\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.918364 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.920840 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jjw8\" (UniqueName: \"kubernetes.io/projected/1e7c8270-346b-429d-a775-abb648245a40-kube-api-access-2jjw8\") pod \"ironic-operator-controller-manager-f99f54bc8-2cfvd\" (UID: \"1e7c8270-346b-429d-a775-abb648245a40\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.921351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9czn\" (UniqueName: \"kubernetes.io/projected/76d489d2-17da-4af8-8fc5-d8ce6451a45c-kube-api-access-l9czn\") pod \"nova-operator-controller-manager-5fbbf8b6cc-fzx6g\" (UID: \"76d489d2-17da-4af8-8fc5-d8ce6451a45c\") " 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.927048 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62jkl\" (UniqueName: \"kubernetes.io/projected/dc3a99b3-36d9-41bc-94f7-74b47980f602-kube-api-access-62jkl\") pod \"neutron-operator-controller-manager-7cd87b778f-dt8d6\" (UID: \"dc3a99b3-36d9-41bc-94f7-74b47980f602\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.928302 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.930768 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.933406 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.934767 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ww86t" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.939949 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.951869 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.956128 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tt9wg" Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.977736 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.988337 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb"] Jan 03 04:31:13 crc kubenswrapper[4865]: I0103 04:31:13.997467 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.000570 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42v9\" (UniqueName: \"kubernetes.io/projected/1785ab82-9c1c-41c0-aa07-0285dd49b221-kube-api-access-n42v9\") pod \"octavia-operator-controller-manager-68c649d9d-8jc6g\" (UID: \"1785ab82-9c1c-41c0-aa07-0285dd49b221\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.000651 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.000679 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2p9gk\" (UniqueName: \"kubernetes.io/projected/aae3d614-123b-48a1-81fa-84f2c04b3923-kube-api-access-2p9gk\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.000775 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpb5b\" (UniqueName: \"kubernetes.io/projected/a179b7cc-8be0-4956-83ad-7b8b8087103b-kube-api-access-gpb5b\") pod \"ovn-operator-controller-manager-bf6d4f946-gsncl\" (UID: \"a179b7cc-8be0-4956-83ad-7b8b8087103b\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.000801 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpvg\" (UniqueName: \"kubernetes.io/projected/ac1ae731-f2d2-436a-b3ef-641ebf79814d-kube-api-access-frpvg\") pod \"swift-operator-controller-manager-bb586bbf4-9hsk5\" (UID: \"ac1ae731-f2d2-436a-b3ef-641ebf79814d\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.000825 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvp66\" (UniqueName: \"kubernetes.io/projected/500fd3bd-494f-428c-9437-a71add6116d6-kube-api-access-pvp66\") pod \"placement-operator-controller-manager-9b6f8f78c-zfntt\" (UID: \"500fd3bd-494f-428c-9437-a71add6116d6\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.000857 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phbt\" (UniqueName: 
\"kubernetes.io/projected/56f034ca-a5ed-4b5b-89ca-82ff95662601-kube-api-access-5phbt\") pod \"telemetry-operator-controller-manager-68d988df55-kgjdb\" (UID: \"56f034ca-a5ed-4b5b-89ca-82ff95662601\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.011033 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.015362 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.017305 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.021447 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.021447 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.021472 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mcnzf" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.021697 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42v9\" (UniqueName: \"kubernetes.io/projected/1785ab82-9c1c-41c0-aa07-0285dd49b221-kube-api-access-n42v9\") pod \"octavia-operator-controller-manager-68c649d9d-8jc6g\" (UID: \"1785ab82-9c1c-41c0-aa07-0285dd49b221\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.022105 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gpb5b\" (UniqueName: \"kubernetes.io/projected/a179b7cc-8be0-4956-83ad-7b8b8087103b-kube-api-access-gpb5b\") pod \"ovn-operator-controller-manager-bf6d4f946-gsncl\" (UID: \"a179b7cc-8be0-4956-83ad-7b8b8087103b\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.022846 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.024685 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.030546 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-b8nms" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.033603 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.058335 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.072115 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.101417 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zwzgn"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102020 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102070 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jjc\" (UniqueName: \"kubernetes.io/projected/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-kube-api-access-25jjc\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102093 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9gk\" (UniqueName: \"kubernetes.io/projected/aae3d614-123b-48a1-81fa-84f2c04b3923-kube-api-access-2p9gk\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102125 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frpvg\" (UniqueName: \"kubernetes.io/projected/ac1ae731-f2d2-436a-b3ef-641ebf79814d-kube-api-access-frpvg\") pod 
\"swift-operator-controller-manager-bb586bbf4-9hsk5\" (UID: \"ac1ae731-f2d2-436a-b3ef-641ebf79814d\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102145 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvp66\" (UniqueName: \"kubernetes.io/projected/500fd3bd-494f-428c-9437-a71add6116d6-kube-api-access-pvp66\") pod \"placement-operator-controller-manager-9b6f8f78c-zfntt\" (UID: \"500fd3bd-494f-428c-9437-a71add6116d6\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102167 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phbt\" (UniqueName: \"kubernetes.io/projected/56f034ca-a5ed-4b5b-89ca-82ff95662601-kube-api-access-5phbt\") pod \"telemetry-operator-controller-manager-68d988df55-kgjdb\" (UID: \"56f034ca-a5ed-4b5b-89ca-82ff95662601\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.102192 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102203 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102222 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dqjhm\" (UniqueName: \"kubernetes.io/projected/6af3b14c-24a5-4cf7-8cad-9583e2eb0b40-kube-api-access-dqjhm\") pod \"test-operator-controller-manager-6c866cfdcb-fggfw\" (UID: \"6af3b14c-24a5-4cf7-8cad-9583e2eb0b40\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.102251 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert podName:aae3d614-123b-48a1-81fa-84f2c04b3923 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:14.602231139 +0000 UTC m=+901.719284324 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" (UID: "aae3d614-123b-48a1-81fa-84f2c04b3923") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102276 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvfs\" (UniqueName: \"kubernetes.io/projected/8006ff7b-528f-4750-ba59-5aaacd35649b-kube-api-access-dsvfs\") pod \"watcher-operator-controller-manager-9dbdf6486-cshl4\" (UID: \"8006ff7b-528f-4750-ba59-5aaacd35649b\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.102325 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.108326 
4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.121850 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvp66\" (UniqueName: \"kubernetes.io/projected/500fd3bd-494f-428c-9437-a71add6116d6-kube-api-access-pvp66\") pod \"placement-operator-controller-manager-9b6f8f78c-zfntt\" (UID: \"500fd3bd-494f-428c-9437-a71add6116d6\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.123994 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phbt\" (UniqueName: \"kubernetes.io/projected/56f034ca-a5ed-4b5b-89ca-82ff95662601-kube-api-access-5phbt\") pod \"telemetry-operator-controller-manager-68d988df55-kgjdb\" (UID: \"56f034ca-a5ed-4b5b-89ca-82ff95662601\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.125096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9gk\" (UniqueName: \"kubernetes.io/projected/aae3d614-123b-48a1-81fa-84f2c04b3923-kube-api-access-2p9gk\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.128183 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frpvg\" (UniqueName: \"kubernetes.io/projected/ac1ae731-f2d2-436a-b3ef-641ebf79814d-kube-api-access-frpvg\") pod \"swift-operator-controller-manager-bb586bbf4-9hsk5\" (UID: \"ac1ae731-f2d2-436a-b3ef-641ebf79814d\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" Jan 03 
04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.142017 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.153782 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.171958 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.184991 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.204264 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt55r\" (UniqueName: \"kubernetes.io/projected/ebc5bbec-4b11-47c8-a018-ddefda88a53b-kube-api-access-qt55r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8zvh8\" (UID: \"ebc5bbec-4b11-47c8-a018-ddefda88a53b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.204344 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25jjc\" (UniqueName: \"kubernetes.io/projected/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-kube-api-access-25jjc\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.204424 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.204446 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqjhm\" (UniqueName: \"kubernetes.io/projected/6af3b14c-24a5-4cf7-8cad-9583e2eb0b40-kube-api-access-dqjhm\") pod \"test-operator-controller-manager-6c866cfdcb-fggfw\" (UID: \"6af3b14c-24a5-4cf7-8cad-9583e2eb0b40\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.204463 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvfs\" (UniqueName: \"kubernetes.io/projected/8006ff7b-528f-4750-ba59-5aaacd35649b-kube-api-access-dsvfs\") pod \"watcher-operator-controller-manager-9dbdf6486-cshl4\" (UID: \"8006ff7b-528f-4750-ba59-5aaacd35649b\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.204562 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.204682 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.204729 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:14.704714799 +0000 UTC m=+901.821767984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.204956 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.205006 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:14.704989636 +0000 UTC m=+901.822042821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "metrics-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.215793 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.224439 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25jjc\" (UniqueName: \"kubernetes.io/projected/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-kube-api-access-25jjc\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.234652 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvfs\" (UniqueName: \"kubernetes.io/projected/8006ff7b-528f-4750-ba59-5aaacd35649b-kube-api-access-dsvfs\") pod \"watcher-operator-controller-manager-9dbdf6486-cshl4\" (UID: \"8006ff7b-528f-4750-ba59-5aaacd35649b\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.234723 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.236546 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqjhm\" (UniqueName: \"kubernetes.io/projected/6af3b14c-24a5-4cf7-8cad-9583e2eb0b40-kube-api-access-dqjhm\") pod \"test-operator-controller-manager-6c866cfdcb-fggfw\" (UID: \"6af3b14c-24a5-4cf7-8cad-9583e2eb0b40\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.265498 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.269510 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.279593 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.306528 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt55r\" (UniqueName: \"kubernetes.io/projected/ebc5bbec-4b11-47c8-a018-ddefda88a53b-kube-api-access-qt55r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8zvh8\" (UID: \"ebc5bbec-4b11-47c8-a018-ddefda88a53b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.307358 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.307426 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert podName:ce7f03b5-6280-4cd8-b3a6-865329b1b9ce nodeName:}" failed. No retries permitted until 2026-01-03 04:31:15.307408455 +0000 UTC m=+902.424461640 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert") pod "infra-operator-controller-manager-648996cf74-xqj6p" (UID: "ce7f03b5-6280-4cd8-b3a6-865329b1b9ce") : secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.307468 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.327670 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt55r\" (UniqueName: \"kubernetes.io/projected/ebc5bbec-4b11-47c8-a018-ddefda88a53b-kube-api-access-qt55r\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8zvh8\" (UID: \"ebc5bbec-4b11-47c8-a018-ddefda88a53b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.361545 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.487288 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.518182 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.525153 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.534356 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-95md4"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.554047 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.614729 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.614903 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.614945 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert 
podName:aae3d614-123b-48a1-81fa-84f2c04b3923 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:15.614932189 +0000 UTC m=+902.731985374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" (UID: "aae3d614-123b-48a1-81fa-84f2c04b3923") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.716148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.716198 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.716357 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.716434 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:15.716419692 +0000 UTC m=+902.833472877 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "webhook-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.716740 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: E0103 04:31:14.716765 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:15.716757842 +0000 UTC m=+902.833811027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "metrics-server-cert" not found Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.803228 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" event={"ID":"e4095c6a-c9c9-42c0-b79e-a4f467563d27","Type":"ContainerStarted","Data":"1bbc05124668a3b99cadda2c0b91e2df6c9f644661c5b9da4b87b40bdd019567"} Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.805742 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" event={"ID":"31214035-ff7b-4c07-87b8-52a98b09cd52","Type":"ContainerStarted","Data":"91f613e7f41b9eabe88bf8527ef6546dfa8f9a6408eb66acb0757add168c0670"} Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.807595 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" event={"ID":"675aef60-25dd-4113-a4cf-2f9b91a21150","Type":"ContainerStarted","Data":"2189eeef0b93a0e39201f1e38b3e917f9b5b45a6f4934b3b54a2cdaf692d8d90"} Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.808993 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" event={"ID":"13c73aaf-30a7-4530-afff-39ec069fccde","Type":"ContainerStarted","Data":"b8a06eaec091d6a3cbb31cc19bf7744e4e4b5a4171187023d3e4d5aef643d6b4"} Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.809980 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" event={"ID":"45a74d9c-8e20-4f90-ad8b-8e139ad592fd","Type":"ContainerStarted","Data":"5852c478f76dc15ef17cce76e960180b381ceabef315e5a9fdb15b833c40be16"} Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.909229 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.916748 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.933533 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.948757 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-d6psw"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.977536 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.977625 4865 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd"] Jan 03 04:31:14 crc kubenswrapper[4865]: I0103 04:31:14.981275 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.064917 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.074756 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.078239 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.089568 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw"] Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.097769 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsvfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-cshl4_openstack-operators(8006ff7b-528f-4750-ba59-5aaacd35649b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.098440 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvp66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-zfntt_openstack-operators(500fd3bd-494f-428c-9437-a71add6116d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.098503 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frpvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-9hsk5_openstack-operators(ac1ae731-f2d2-436a-b3ef-641ebf79814d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.098544 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qt55r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8zvh8_openstack-operators(ebc5bbec-4b11-47c8-a018-ddefda88a53b): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.098749 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n42v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-8jc6g_openstack-operators(1785ab82-9c1c-41c0-aa07-0285dd49b221): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.098887 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpb5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-gsncl_openstack-operators(a179b7cc-8be0-4956-83ad-7b8b8087103b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.098937 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqjhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-fggfw_openstack-operators(6af3b14c-24a5-4cf7-8cad-9583e2eb0b40): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.100914 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" podUID="6af3b14c-24a5-4cf7-8cad-9583e2eb0b40" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.100961 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" podUID="8006ff7b-528f-4750-ba59-5aaacd35649b" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.100979 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" podUID="500fd3bd-494f-428c-9437-a71add6116d6" Jan 03 04:31:15 crc 
kubenswrapper[4865]: E0103 04:31:15.100997 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" podUID="ac1ae731-f2d2-436a-b3ef-641ebf79814d" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.101013 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" podUID="ebc5bbec-4b11-47c8-a018-ddefda88a53b" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.101030 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" podUID="1785ab82-9c1c-41c0-aa07-0285dd49b221" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.101049 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" podUID="a179b7cc-8be0-4956-83ad-7b8b8087103b" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.103003 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5phbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-68d988df55-kgjdb_openstack-operators(56f034ca-a5ed-4b5b-89ca-82ff95662601): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.104255 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" podUID="56f034ca-a5ed-4b5b-89ca-82ff95662601" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.105237 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.111555 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.117054 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.122217 4865 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl"] Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.325915 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.326103 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.326289 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert podName:ce7f03b5-6280-4cd8-b3a6-865329b1b9ce nodeName:}" failed. No retries permitted until 2026-01-03 04:31:17.326176377 +0000 UTC m=+904.443229572 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert") pod "infra-operator-controller-manager-648996cf74-xqj6p" (UID: "ce7f03b5-6280-4cd8-b3a6-865329b1b9ce") : secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.629548 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.629720 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.629782 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert podName:aae3d614-123b-48a1-81fa-84f2c04b3923 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:17.629765514 +0000 UTC m=+904.746818709 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" (UID: "aae3d614-123b-48a1-81fa-84f2c04b3923") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.730818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.730979 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.731025 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.731107 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:17.731087543 +0000 UTC m=+904.848140728 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "metrics-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.731162 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.731261 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:17.731232197 +0000 UTC m=+904.848285422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "webhook-server-cert" not found Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.824002 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" event={"ID":"76d489d2-17da-4af8-8fc5-d8ce6451a45c","Type":"ContainerStarted","Data":"99fded7af78c0011fc6f2922e8585567c48f42fba88104711d3227d32a7e8780"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.825324 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" event={"ID":"500fd3bd-494f-428c-9437-a71add6116d6","Type":"ContainerStarted","Data":"f5853e32e9270e80c85267caf0d8288c88fb26617b14e2150348c09d05388ef1"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.827210 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" event={"ID":"ebc5bbec-4b11-47c8-a018-ddefda88a53b","Type":"ContainerStarted","Data":"e9876e1ce7d39c30f2b2718b04d9124db0c06ef99bb3a2f3b60db19d2979e92b"} Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.827865 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" podUID="500fd3bd-494f-428c-9437-a71add6116d6" Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.829141 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" podUID="ebc5bbec-4b11-47c8-a018-ddefda88a53b" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.829308 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" event={"ID":"ac1ae731-f2d2-436a-b3ef-641ebf79814d","Type":"ContainerStarted","Data":"4ecde73708588b112ce4863720d16eec13baa9c8c00ae407c9db0f5ee224cc9b"} Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.830877 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" 
podUID="ac1ae731-f2d2-436a-b3ef-641ebf79814d" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.831196 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" event={"ID":"c4804e80-40f4-4f53-abfb-cafc1299f889","Type":"ContainerStarted","Data":"651d365d50e6b7192c89167602efd77c57e7648e03ae43887dc0e69ff556f2d6"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.834647 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" event={"ID":"8006ff7b-528f-4750-ba59-5aaacd35649b","Type":"ContainerStarted","Data":"b5abe62215bec0a19b4de9d7dd8c5293e488eff661c11ca7ef06dcdf698a39a8"} Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.836425 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" podUID="8006ff7b-528f-4750-ba59-5aaacd35649b" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.837455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" event={"ID":"a179b7cc-8be0-4956-83ad-7b8b8087103b","Type":"ContainerStarted","Data":"66aa779c9b8cb4490e7b5b8ea0aca026b0e4b6f1c1fb6e5b86baee2682da1c75"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.838797 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" event={"ID":"1e7c8270-346b-429d-a775-abb648245a40","Type":"ContainerStarted","Data":"4a810005dfc0c410ea280e7637199f3ec5ace4645986293961d355581f404acc"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.852765 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" event={"ID":"78091396-35cf-4a65-878b-02705fd27e09","Type":"ContainerStarted","Data":"2920bcecc6b0fb8af1e42f353bea2f7d83cd6e5fd45385927dd40baaa6392369"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.854461 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" event={"ID":"0adfe2b3-9c76-4213-a856-e834ff2b24e0","Type":"ContainerStarted","Data":"f98b34679c891a484925a164bb26df5f98322616d2456d05756eb2ba9f07483e"} Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.854508 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" podUID="a179b7cc-8be0-4956-83ad-7b8b8087103b" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.856166 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" event={"ID":"6af3b14c-24a5-4cf7-8cad-9583e2eb0b40","Type":"ContainerStarted","Data":"76f9923b3dadfb902dfeaf2ef94c97c6c3e6a5360479eb5f6117527d901e1020"} Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.861854 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" podUID="6af3b14c-24a5-4cf7-8cad-9583e2eb0b40" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.867056 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" event={"ID":"56f034ca-a5ed-4b5b-89ca-82ff95662601","Type":"ContainerStarted","Data":"cba33ebdd8691e3a7cd163e57fa6512883e19b562f062b65ad450e0c0df0dd0a"} Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.868554 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" podUID="56f034ca-a5ed-4b5b-89ca-82ff95662601" Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.869561 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" event={"ID":"dc3a99b3-36d9-41bc-94f7-74b47980f602","Type":"ContainerStarted","Data":"218c9fbf87211819a966d76f8ddf49fb087c7df6da376928d7808b8728ccaf91"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.876582 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" event={"ID":"8ed18047-e002-419d-b950-2535d4d778c1","Type":"ContainerStarted","Data":"c276a59dfbd2c535fa4994fbf3c541c7738adf1f64189a6e3b472af6b0373dc9"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.878098 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" event={"ID":"1785ab82-9c1c-41c0-aa07-0285dd49b221","Type":"ContainerStarted","Data":"7113660900eba483a53673c23a5bcf9ea74fe25f21fbdb3ef5564d0621e7c2d3"} Jan 03 04:31:15 crc kubenswrapper[4865]: I0103 04:31:15.878334 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zwzgn" 
podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="registry-server" containerID="cri-o://34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8" gracePeriod=2 Jan 03 04:31:15 crc kubenswrapper[4865]: E0103 04:31:15.880351 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" podUID="1785ab82-9c1c-41c0-aa07-0285dd49b221" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.726973 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.858991 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j4jd\" (UniqueName: \"kubernetes.io/projected/87340c91-82ae-4300-852c-fcc4ee3780f3-kube-api-access-6j4jd\") pod \"87340c91-82ae-4300-852c-fcc4ee3780f3\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.859087 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-catalog-content\") pod \"87340c91-82ae-4300-852c-fcc4ee3780f3\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.859163 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-utilities\") pod \"87340c91-82ae-4300-852c-fcc4ee3780f3\" (UID: \"87340c91-82ae-4300-852c-fcc4ee3780f3\") " Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.860222 4865 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-utilities" (OuterVolumeSpecName: "utilities") pod "87340c91-82ae-4300-852c-fcc4ee3780f3" (UID: "87340c91-82ae-4300-852c-fcc4ee3780f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.883235 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87340c91-82ae-4300-852c-fcc4ee3780f3-kube-api-access-6j4jd" (OuterVolumeSpecName: "kube-api-access-6j4jd") pod "87340c91-82ae-4300-852c-fcc4ee3780f3" (UID: "87340c91-82ae-4300-852c-fcc4ee3780f3"). InnerVolumeSpecName "kube-api-access-6j4jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.913767 4865 generic.go:334] "Generic (PLEG): container finished" podID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerID="34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8" exitCode=0 Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.913841 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zwzgn" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.913896 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerDied","Data":"34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8"} Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.913925 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zwzgn" event={"ID":"87340c91-82ae-4300-852c-fcc4ee3780f3","Type":"ContainerDied","Data":"7a4b79c45ea641ffc21e7fd82cc019440aca056fbf3b722a599d4034f42821c0"} Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.913943 4865 scope.go:117] "RemoveContainer" containerID="34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.915780 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" podUID="a179b7cc-8be0-4956-83ad-7b8b8087103b" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.916927 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" podUID="1785ab82-9c1c-41c0-aa07-0285dd49b221" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.916991 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" podUID="8006ff7b-528f-4750-ba59-5aaacd35649b" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.918147 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" podUID="ac1ae731-f2d2-436a-b3ef-641ebf79814d" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.918202 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" podUID="500fd3bd-494f-428c-9437-a71add6116d6" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.918285 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" podUID="6af3b14c-24a5-4cf7-8cad-9583e2eb0b40" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.918773 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" podUID="ebc5bbec-4b11-47c8-a018-ddefda88a53b" Jan 03 04:31:16 crc kubenswrapper[4865]: E0103 04:31:16.919032 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" podUID="56f034ca-a5ed-4b5b-89ca-82ff95662601" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.949688 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87340c91-82ae-4300-852c-fcc4ee3780f3" (UID: "87340c91-82ae-4300-852c-fcc4ee3780f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.955692 4865 scope.go:117] "RemoveContainer" containerID="f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.961009 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j4jd\" (UniqueName: \"kubernetes.io/projected/87340c91-82ae-4300-852c-fcc4ee3780f3-kube-api-access-6j4jd\") on node \"crc\" DevicePath \"\"" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.961039 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.961052 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87340c91-82ae-4300-852c-fcc4ee3780f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:31:16 crc kubenswrapper[4865]: I0103 04:31:16.994011 4865 scope.go:117] "RemoveContainer" containerID="c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.136833 4865 scope.go:117] "RemoveContainer" containerID="34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8" Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.137930 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8\": container with ID starting with 34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8 not found: ID does not exist" containerID="34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.137980 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8"} err="failed to get container status \"34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8\": rpc error: code = NotFound desc = could not find container \"34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8\": container with ID starting with 34ad0b524e117a0c6b4b3ec5385e09e24688156efa34816095b74dbfa60b4eb8 not found: ID does not exist" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.138012 4865 scope.go:117] "RemoveContainer" containerID="f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664" Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.139004 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664\": container with ID starting with f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664 not found: ID does not exist" containerID="f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.139036 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664"} err="failed to get container status \"f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664\": rpc error: code = NotFound desc = could not find container \"f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664\": container with ID starting with f21c4fcf620692d3b353d6015c4b3cef78762cf8e0643235bfad3a22fa909664 not found: ID does not exist" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.139055 4865 scope.go:117] "RemoveContainer" containerID="c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85" Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.139734 4865 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85\": container with ID starting with c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85 not found: ID does not exist" containerID="c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.139759 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85"} err="failed to get container status \"c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85\": rpc error: code = NotFound desc = could not find container \"c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85\": container with ID starting with c7da34546d5f062f44acb44b2b2ce776947793fe9a8a9292f33a8de6230deb85 not found: ID does not exist" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.234117 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zwzgn"] Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.238253 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zwzgn"] Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.365998 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.366199 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.366312 4865 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert podName:ce7f03b5-6280-4cd8-b3a6-865329b1b9ce nodeName:}" failed. No retries permitted until 2026-01-03 04:31:21.36629281 +0000 UTC m=+908.483345995 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert") pod "infra-operator-controller-manager-648996cf74-xqj6p" (UID: "ce7f03b5-6280-4cd8-b3a6-865329b1b9ce") : secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.670123 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.670390 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.670445 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert podName:aae3d614-123b-48a1-81fa-84f2c04b3923 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:21.670428342 +0000 UTC m=+908.787481527 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" (UID: "aae3d614-123b-48a1-81fa-84f2c04b3923") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.771455 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:17 crc kubenswrapper[4865]: I0103 04:31:17.771515 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.771718 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.771722 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.771783 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:21.771766562 +0000 UTC m=+908.888819747 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "webhook-server-cert" not found Jan 03 04:31:17 crc kubenswrapper[4865]: E0103 04:31:17.771903 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:21.771875745 +0000 UTC m=+908.888928930 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "metrics-server-cert" not found Jan 03 04:31:19 crc kubenswrapper[4865]: I0103 04:31:19.166945 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" path="/var/lib/kubelet/pods/87340c91-82ae-4300-852c-fcc4ee3780f3/volumes" Jan 03 04:31:21 crc kubenswrapper[4865]: I0103 04:31:21.462563 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.462769 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.463461 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert podName:ce7f03b5-6280-4cd8-b3a6-865329b1b9ce nodeName:}" failed. No retries permitted until 2026-01-03 04:31:29.463440941 +0000 UTC m=+916.580494136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert") pod "infra-operator-controller-manager-648996cf74-xqj6p" (UID: "ce7f03b5-6280-4cd8-b3a6-865329b1b9ce") : secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:21 crc kubenswrapper[4865]: I0103 04:31:21.766977 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.767146 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.767244 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert podName:aae3d614-123b-48a1-81fa-84f2c04b3923 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:29.767217494 +0000 UTC m=+916.884270679 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" (UID: "aae3d614-123b-48a1-81fa-84f2c04b3923") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:21 crc kubenswrapper[4865]: I0103 04:31:21.868021 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:21 crc kubenswrapper[4865]: I0103 04:31:21.868097 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.868335 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.868356 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.868443 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:29.868423217 +0000 UTC m=+916.985476402 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "webhook-server-cert" not found Jan 03 04:31:21 crc kubenswrapper[4865]: E0103 04:31:21.868487 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:29.868460308 +0000 UTC m=+916.985513533 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "metrics-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.509444 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.509655 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.510672 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert podName:ce7f03b5-6280-4cd8-b3a6-865329b1b9ce nodeName:}" failed. No retries permitted until 2026-01-03 04:31:45.510648486 +0000 UTC m=+932.627701731 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert") pod "infra-operator-controller-manager-648996cf74-xqj6p" (UID: "ce7f03b5-6280-4cd8-b3a6-865329b1b9ce") : secret "infra-operator-webhook-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.814832 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.815051 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.815131 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert podName:aae3d614-123b-48a1-81fa-84f2c04b3923 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:45.815108587 +0000 UTC m=+932.932161772 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" (UID: "aae3d614-123b-48a1-81fa-84f2c04b3923") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.884280 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxsz"] Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.884566 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="extract-content" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.884578 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="extract-content" Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.884594 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="registry-server" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.884600 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="registry-server" Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.884611 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="extract-utilities" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.884617 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="extract-utilities" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.884740 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="87340c91-82ae-4300-852c-fcc4ee3780f3" containerName="registry-server" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.885601 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.920533 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxsz"] Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.922221 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.922293 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.922352 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-catalog-content\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.922467 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcj49\" (UniqueName: \"kubernetes.io/projected/646ece76-4e13-4ce9-ac6a-71f27b883e30-kube-api-access-dcj49\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:29 
crc kubenswrapper[4865]: E0103 04:31:29.922563 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.922606 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:45.922592339 +0000 UTC m=+933.039645524 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "webhook-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.922677 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 03 04:31:29 crc kubenswrapper[4865]: I0103 04:31:29.922691 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-utilities\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:29 crc kubenswrapper[4865]: E0103 04:31:29.922763 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs podName:fe531b80-38a8-4a91-95c3-cd9ffe4dee91 nodeName:}" failed. No retries permitted until 2026-01-03 04:31:45.922740423 +0000 UTC m=+933.039793708 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs") pod "openstack-operator-controller-manager-54cff86f68-wmvwl" (UID: "fe531b80-38a8-4a91-95c3-cd9ffe4dee91") : secret "metrics-server-cert" not found Jan 03 04:31:30 crc kubenswrapper[4865]: I0103 04:31:30.023707 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-catalog-content\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:30 crc kubenswrapper[4865]: I0103 04:31:30.023777 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcj49\" (UniqueName: \"kubernetes.io/projected/646ece76-4e13-4ce9-ac6a-71f27b883e30-kube-api-access-dcj49\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:30 crc kubenswrapper[4865]: I0103 04:31:30.023817 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-utilities\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:30 crc kubenswrapper[4865]: I0103 04:31:30.024270 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-catalog-content\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:30 crc kubenswrapper[4865]: I0103 04:31:30.024311 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-utilities\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:30 crc kubenswrapper[4865]: I0103 04:31:30.047673 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcj49\" (UniqueName: \"kubernetes.io/projected/646ece76-4e13-4ce9-ac6a-71f27b883e30-kube-api-access-dcj49\") pod \"redhat-marketplace-6jxsz\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:30 crc kubenswrapper[4865]: I0103 04:31:30.262198 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:30 crc kubenswrapper[4865]: E0103 04:31:30.771483 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 03 04:31:30 crc kubenswrapper[4865]: E0103 04:31:30.771760 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9czn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-fzx6g_openstack-operators(76d489d2-17da-4af8-8fc5-d8ce6451a45c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:31:30 crc kubenswrapper[4865]: E0103 04:31:30.773365 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" podUID="76d489d2-17da-4af8-8fc5-d8ce6451a45c" Jan 03 04:31:31 crc kubenswrapper[4865]: E0103 04:31:31.031017 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" podUID="76d489d2-17da-4af8-8fc5-d8ce6451a45c" Jan 03 04:31:32 crc kubenswrapper[4865]: I0103 04:31:32.214977 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxsz"] Jan 03 04:31:32 crc kubenswrapper[4865]: W0103 04:31:32.304272 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod646ece76_4e13_4ce9_ac6a_71f27b883e30.slice/crio-d21c69a0ddd93320d1325bce8cb666715429b705c8a85186982fa3ac27a4cbce WatchSource:0}: Error finding container d21c69a0ddd93320d1325bce8cb666715429b705c8a85186982fa3ac27a4cbce: Status 404 returned error can't find the container with id d21c69a0ddd93320d1325bce8cb666715429b705c8a85186982fa3ac27a4cbce Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.044931 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" event={"ID":"e4095c6a-c9c9-42c0-b79e-a4f467563d27","Type":"ContainerStarted","Data":"0990bb5ec654f3d55f9d22742b70811fb018baa28b76b1c990d483b329c9518e"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.045483 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.049168 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" event={"ID":"78091396-35cf-4a65-878b-02705fd27e09","Type":"ContainerStarted","Data":"eca8777d7b411091e5fb9f2fdf6bb9cf308ebcbf024d1ef138c90bdab189f63e"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.049330 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.051204 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" event={"ID":"31214035-ff7b-4c07-87b8-52a98b09cd52","Type":"ContainerStarted","Data":"ab59dc9b1ef3cb4dc6c8ddf72d9fbf2493374c4a1820464d376a63b9700cad78"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.051279 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.052407 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" event={"ID":"675aef60-25dd-4113-a4cf-2f9b91a21150","Type":"ContainerStarted","Data":"35ebed2c34618b641f686f3ee0218732822bd6d3878cac7eae6c35f442c3215e"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.052628 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.053932 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxsz" event={"ID":"646ece76-4e13-4ce9-ac6a-71f27b883e30","Type":"ContainerStarted","Data":"d21c69a0ddd93320d1325bce8cb666715429b705c8a85186982fa3ac27a4cbce"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.055173 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" event={"ID":"dc3a99b3-36d9-41bc-94f7-74b47980f602","Type":"ContainerStarted","Data":"06c420450e40f4b0bd1c639ed3d5bd1a1deba9bbb519b811e924fb63a64c17dc"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.055559 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.057536 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" event={"ID":"45a74d9c-8e20-4f90-ad8b-8e139ad592fd","Type":"ContainerStarted","Data":"22070be354691582db4458015f96529acb1e6e169acd05ad9e3c8e0b0320fbbd"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.065708 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.089174 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" event={"ID":"1e7c8270-346b-429d-a775-abb648245a40","Type":"ContainerStarted","Data":"8538b800dd638c07b603c628446b827aeeb077c51beada75797560cb38026725"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.089288 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.095255 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" event={"ID":"8ed18047-e002-419d-b950-2535d4d778c1","Type":"ContainerStarted","Data":"fa2b6907e1f413cc1409fe7f776417b58a61238d4f8f9720f1a9036f76682086"} Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.095390 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.096108 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" podStartSLOduration=3.966062593 podStartE2EDuration="20.09609396s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.606893241 +0000 UTC m=+901.723946426" lastFinishedPulling="2026-01-03 04:31:30.736924588 +0000 UTC m=+917.853977793" observedRunningTime="2026-01-03 04:31:33.077649714 +0000 UTC m=+920.194702899" watchObservedRunningTime="2026-01-03 04:31:33.09609396 +0000 UTC m=+920.213147145" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.099072 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" podStartSLOduration=3.079709806 podStartE2EDuration="20.0990667s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.957394671 +0000 UTC m=+902.074447866" lastFinishedPulling="2026-01-03 04:31:31.976751575 +0000 UTC m=+919.093804760" observedRunningTime="2026-01-03 04:31:33.09424386 +0000 UTC m=+920.211297035" watchObservedRunningTime="2026-01-03 04:31:33.0990667 +0000 UTC m=+920.216119885" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.116717 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" podStartSLOduration=3.38302173 podStartE2EDuration="20.116698115s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.614646252 +0000 UTC m=+901.731699437" lastFinishedPulling="2026-01-03 04:31:31.348322637 +0000 UTC m=+918.465375822" observedRunningTime="2026-01-03 04:31:33.115346708 +0000 UTC 
m=+920.232399893" watchObservedRunningTime="2026-01-03 04:31:33.116698115 +0000 UTC m=+920.233751320" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.130815 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" podStartSLOduration=3.284140173 podStartE2EDuration="20.130801154s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.50033248 +0000 UTC m=+901.617385665" lastFinishedPulling="2026-01-03 04:31:31.346993461 +0000 UTC m=+918.464046646" observedRunningTime="2026-01-03 04:31:33.130539937 +0000 UTC m=+920.247593122" watchObservedRunningTime="2026-01-03 04:31:33.130801154 +0000 UTC m=+920.247854339" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.153511 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" podStartSLOduration=4.059431163 podStartE2EDuration="20.153489414s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.642081266 +0000 UTC m=+901.759134441" lastFinishedPulling="2026-01-03 04:31:30.736139467 +0000 UTC m=+917.853192692" observedRunningTime="2026-01-03 04:31:33.143076814 +0000 UTC m=+920.260130009" watchObservedRunningTime="2026-01-03 04:31:33.153489414 +0000 UTC m=+920.270542599" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.172253 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" podStartSLOduration=4.424468734 podStartE2EDuration="20.172236939s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.988367362 +0000 UTC m=+902.105420547" lastFinishedPulling="2026-01-03 04:31:30.736135567 +0000 UTC m=+917.853188752" observedRunningTime="2026-01-03 04:31:33.159980799 +0000 UTC m=+920.277033984" 
watchObservedRunningTime="2026-01-03 04:31:33.172236939 +0000 UTC m=+920.289290124" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.222085 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" podStartSLOduration=3.864397705 podStartE2EDuration="20.222069159s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.989315367 +0000 UTC m=+902.106368552" lastFinishedPulling="2026-01-03 04:31:31.346986821 +0000 UTC m=+918.464040006" observedRunningTime="2026-01-03 04:31:33.192508134 +0000 UTC m=+920.309561329" watchObservedRunningTime="2026-01-03 04:31:33.222069159 +0000 UTC m=+920.339122344" Jan 03 04:31:33 crc kubenswrapper[4865]: I0103 04:31:33.246929 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" podStartSLOduration=4.462606432 podStartE2EDuration="20.246908208s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.951850941 +0000 UTC m=+902.068904126" lastFinishedPulling="2026-01-03 04:31:30.736152717 +0000 UTC m=+917.853205902" observedRunningTime="2026-01-03 04:31:33.222150691 +0000 UTC m=+920.339203876" watchObservedRunningTime="2026-01-03 04:31:33.246908208 +0000 UTC m=+920.363961393" Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.104772 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" event={"ID":"13c73aaf-30a7-4530-afff-39ec069fccde","Type":"ContainerStarted","Data":"87376a62bd997cd6f4220b56466875d87bf3d30da32df087df8b7541585979c6"} Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.105418 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" Jan 03 04:31:34 crc kubenswrapper[4865]: 
I0103 04:31:34.107952 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" event={"ID":"c4804e80-40f4-4f53-abfb-cafc1299f889","Type":"ContainerStarted","Data":"91b862814a9a6ed17ca9e523545a8f92b99c4092b7b94eda30abe01b621b79e0"} Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.108442 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.111298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" event={"ID":"0adfe2b3-9c76-4213-a856-e834ff2b24e0","Type":"ContainerStarted","Data":"861dd938c96d81897c28249ce219bdc9c78bd9f05b43621f803a8e88a075e982"} Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.111321 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.121322 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" podStartSLOduration=3.759303922 podStartE2EDuration="21.121301593s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.6149683 +0000 UTC m=+901.732021485" lastFinishedPulling="2026-01-03 04:31:31.976965961 +0000 UTC m=+919.094019156" observedRunningTime="2026-01-03 04:31:34.118928809 +0000 UTC m=+921.235981994" watchObservedRunningTime="2026-01-03 04:31:34.121301593 +0000 UTC m=+921.238354778" Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.133841 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" podStartSLOduration=4.143446162 
podStartE2EDuration="21.133806649s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.988731891 +0000 UTC m=+902.105785076" lastFinishedPulling="2026-01-03 04:31:31.979092378 +0000 UTC m=+919.096145563" observedRunningTime="2026-01-03 04:31:34.132910195 +0000 UTC m=+921.249963380" watchObservedRunningTime="2026-01-03 04:31:34.133806649 +0000 UTC m=+921.250859834" Jan 03 04:31:34 crc kubenswrapper[4865]: I0103 04:31:34.149907 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" podStartSLOduration=4.754247927 podStartE2EDuration="21.149892912s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.952270502 +0000 UTC m=+902.069323687" lastFinishedPulling="2026-01-03 04:31:31.347915487 +0000 UTC m=+918.464968672" observedRunningTime="2026-01-03 04:31:34.147850178 +0000 UTC m=+921.264903373" watchObservedRunningTime="2026-01-03 04:31:34.149892912 +0000 UTC m=+921.266946097" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.186993 4865 generic.go:334] "Generic (PLEG): container finished" podID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerID="b7fce8ebf54cb8fe59317301c732e9cbe86c63ff29428e19b2c2d397e14acf93" exitCode=0 Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.187073 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxsz" event={"ID":"646ece76-4e13-4ce9-ac6a-71f27b883e30","Type":"ContainerDied","Data":"b7fce8ebf54cb8fe59317301c732e9cbe86c63ff29428e19b2c2d397e14acf93"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.189335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" event={"ID":"56f034ca-a5ed-4b5b-89ca-82ff95662601","Type":"ContainerStarted","Data":"8831d216bfe2b83406b0d7a0ebb4dccbb57ad5d171c1c6706280d013cf211359"} Jan 
03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.189589 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.190710 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" event={"ID":"8006ff7b-528f-4750-ba59-5aaacd35649b","Type":"ContainerStarted","Data":"c1a18b13253fd1ae3f15d7033592c802856c335d664112f9308d7c1188da3c0a"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.190917 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.193099 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" event={"ID":"a179b7cc-8be0-4956-83ad-7b8b8087103b","Type":"ContainerStarted","Data":"717182f762e669113042795d679eae4a16293831ba8f5f5c3518de6eb00bdc71"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.193301 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.195542 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" event={"ID":"1785ab82-9c1c-41c0-aa07-0285dd49b221","Type":"ContainerStarted","Data":"652fade368fc78363847ce2ab2d968d39f0ffa1d64c9819ea08123b39944503b"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.195899 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.197304 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" event={"ID":"500fd3bd-494f-428c-9437-a71add6116d6","Type":"ContainerStarted","Data":"54eff897a4a8c5dccd6aad817fdd03ffa729e6ed41fb1d082647da8f3a3007cb"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.197732 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.199016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" event={"ID":"6af3b14c-24a5-4cf7-8cad-9583e2eb0b40","Type":"ContainerStarted","Data":"7e66bbcfb5e2818cb3448b94a235b2dc603def8a879fe9e7434802d88ffebfc4"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.199438 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.201542 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" event={"ID":"ebc5bbec-4b11-47c8-a018-ddefda88a53b","Type":"ContainerStarted","Data":"0808599d63abdc8f065c7112a18771b2be062c9f6969570cd77ed021fa30b5ca"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.205878 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" event={"ID":"ac1ae731-f2d2-436a-b3ef-641ebf79814d","Type":"ContainerStarted","Data":"ab8f0d4a9cdd52016d81142c0070e0324167239880ab1ccd10c3d80687bbb809"} Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.206040 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.244171 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" podStartSLOduration=11.30776152 podStartE2EDuration="28.244152619s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.098293504 +0000 UTC m=+902.215346689" lastFinishedPulling="2026-01-03 04:31:32.034684603 +0000 UTC m=+919.151737788" observedRunningTime="2026-01-03 04:31:41.238448566 +0000 UTC m=+928.355501761" watchObservedRunningTime="2026-01-03 04:31:41.244152619 +0000 UTC m=+928.361205804" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.257747 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" podStartSLOduration=11.322554816 podStartE2EDuration="28.257726544s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.098692984 +0000 UTC m=+902.215746169" lastFinishedPulling="2026-01-03 04:31:32.033864712 +0000 UTC m=+919.150917897" observedRunningTime="2026-01-03 04:31:41.256541823 +0000 UTC m=+928.373595028" watchObservedRunningTime="2026-01-03 04:31:41.257726544 +0000 UTC m=+928.374779729" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.276860 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" podStartSLOduration=3.393017798 podStartE2EDuration="28.276842438s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.097619386 +0000 UTC m=+902.214672561" lastFinishedPulling="2026-01-03 04:31:39.981443976 +0000 UTC m=+927.098497201" observedRunningTime="2026-01-03 04:31:41.273374235 +0000 UTC m=+928.390427420" watchObservedRunningTime="2026-01-03 04:31:41.276842438 +0000 UTC m=+928.393895633" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.298639 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8zvh8" podStartSLOduration=3.345601663 podStartE2EDuration="28.298621005s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.098483139 +0000 UTC m=+902.215536324" lastFinishedPulling="2026-01-03 04:31:40.051502481 +0000 UTC m=+927.168555666" observedRunningTime="2026-01-03 04:31:41.297219807 +0000 UTC m=+928.414272992" watchObservedRunningTime="2026-01-03 04:31:41.298621005 +0000 UTC m=+928.415674190" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.316668 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" podStartSLOduration=3.433978231 podStartE2EDuration="28.31665009s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.098775167 +0000 UTC m=+902.215828352" lastFinishedPulling="2026-01-03 04:31:39.981446986 +0000 UTC m=+927.098500211" observedRunningTime="2026-01-03 04:31:41.31371162 +0000 UTC m=+928.430764815" watchObservedRunningTime="2026-01-03 04:31:41.31665009 +0000 UTC m=+928.433703275" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.330090 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" podStartSLOduration=3.3782347 podStartE2EDuration="28.33007443s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.09888479 +0000 UTC m=+902.215937975" lastFinishedPulling="2026-01-03 04:31:40.05072447 +0000 UTC m=+927.167777705" observedRunningTime="2026-01-03 04:31:41.328129958 +0000 UTC m=+928.445183143" watchObservedRunningTime="2026-01-03 04:31:41.33007443 +0000 UTC m=+928.447127615" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.342671 4865 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" podStartSLOduration=3.350459952 podStartE2EDuration="28.342653229s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.098354395 +0000 UTC m=+902.215407580" lastFinishedPulling="2026-01-03 04:31:40.090547662 +0000 UTC m=+927.207600857" observedRunningTime="2026-01-03 04:31:41.338589689 +0000 UTC m=+928.455642874" watchObservedRunningTime="2026-01-03 04:31:41.342653229 +0000 UTC m=+928.459706414" Jan 03 04:31:41 crc kubenswrapper[4865]: I0103 04:31:41.356301 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" podStartSLOduration=3.4044435650000002 podStartE2EDuration="28.356279866s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:15.098770486 +0000 UTC m=+902.215823671" lastFinishedPulling="2026-01-03 04:31:40.050606777 +0000 UTC m=+927.167659972" observedRunningTime="2026-01-03 04:31:41.353961393 +0000 UTC m=+928.471014578" watchObservedRunningTime="2026-01-03 04:31:41.356279866 +0000 UTC m=+928.473333051" Jan 03 04:31:42 crc kubenswrapper[4865]: I0103 04:31:42.214574 4865 generic.go:334] "Generic (PLEG): container finished" podID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerID="85f6eedb808bcfaeed09523b3cc8c6432e0773718ae2d6885ceb32fe7a4acfbe" exitCode=0 Jan 03 04:31:42 crc kubenswrapper[4865]: I0103 04:31:42.214640 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxsz" event={"ID":"646ece76-4e13-4ce9-ac6a-71f27b883e30","Type":"ContainerDied","Data":"85f6eedb808bcfaeed09523b3cc8c6432e0773718ae2d6885ceb32fe7a4acfbe"} Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.222632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxsz" 
event={"ID":"646ece76-4e13-4ce9-ac6a-71f27b883e30","Type":"ContainerStarted","Data":"99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9"} Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.637126 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2lcpm" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.667406 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6jxsz" podStartSLOduration=13.245133539 podStartE2EDuration="14.667372384s" podCreationTimestamp="2026-01-03 04:31:29 +0000 UTC" firstStartedPulling="2026-01-03 04:31:41.188640165 +0000 UTC m=+928.305693350" lastFinishedPulling="2026-01-03 04:31:42.61087901 +0000 UTC m=+929.727932195" observedRunningTime="2026-01-03 04:31:43.249074751 +0000 UTC m=+930.366127986" watchObservedRunningTime="2026-01-03 04:31:43.667372384 +0000 UTC m=+930.784425569" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.670643 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-sxcmb" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.685738 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-xk5t7" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.706076 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-8p5g8" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.748815 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-pk2cc" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.758030 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-wlwkg" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.786992 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-95md4" Jan 03 04:31:43 crc kubenswrapper[4865]: I0103 04:31:43.931894 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-d6psw" Jan 03 04:31:44 crc kubenswrapper[4865]: I0103 04:31:44.076923 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-2cfvd" Jan 03 04:31:44 crc kubenswrapper[4865]: I0103 04:31:44.112046 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-vp89g" Jan 03 04:31:44 crc kubenswrapper[4865]: I0103 04:31:44.144837 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-dt8d6" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.565016 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.572075 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7f03b5-6280-4cd8-b3a6-865329b1b9ce-cert\") pod \"infra-operator-controller-manager-648996cf74-xqj6p\" (UID: \"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce\") " pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 
04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.781937 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.874853 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.882845 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aae3d614-123b-48a1-81fa-84f2c04b3923-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq\" (UID: \"aae3d614-123b-48a1-81fa-84f2c04b3923\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.981525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.981627 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 
04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.985745 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-metrics-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.992370 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:31:45 crc kubenswrapper[4865]: I0103 04:31:45.998346 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fe531b80-38a8-4a91-95c3-cd9ffe4dee91-webhook-certs\") pod \"openstack-operator-controller-manager-54cff86f68-wmvwl\" (UID: \"fe531b80-38a8-4a91-95c3-cd9ffe4dee91\") " pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:46 crc kubenswrapper[4865]: I0103 04:31:46.147652 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:31:46 crc kubenswrapper[4865]: I0103 04:31:46.311225 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p"] Jan 03 04:31:46 crc kubenswrapper[4865]: I0103 04:31:46.450282 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq"] Jan 03 04:31:46 crc kubenswrapper[4865]: W0103 04:31:46.459310 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae3d614_123b_48a1_81fa_84f2c04b3923.slice/crio-79347b971a6dcf8cd51bb0d615461386e5e8048c76863e28d2c62892ea532e2a WatchSource:0}: Error finding container 79347b971a6dcf8cd51bb0d615461386e5e8048c76863e28d2c62892ea532e2a: Status 404 returned error can't find the container with id 79347b971a6dcf8cd51bb0d615461386e5e8048c76863e28d2c62892ea532e2a Jan 03 04:31:46 crc kubenswrapper[4865]: I0103 04:31:46.586608 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl"] Jan 03 04:31:46 crc kubenswrapper[4865]: W0103 04:31:46.593736 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe531b80_38a8_4a91_95c3_cd9ffe4dee91.slice/crio-c8b58201fe3b8fcacea86c4941930c28f089c05e6c48087cbb4f0fd7d5938d33 WatchSource:0}: Error finding container c8b58201fe3b8fcacea86c4941930c28f089c05e6c48087cbb4f0fd7d5938d33: Status 404 returned error can't find the container with id c8b58201fe3b8fcacea86c4941930c28f089c05e6c48087cbb4f0fd7d5938d33 Jan 03 04:31:47 crc kubenswrapper[4865]: I0103 04:31:47.254967 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" 
event={"ID":"fe531b80-38a8-4a91-95c3-cd9ffe4dee91","Type":"ContainerStarted","Data":"c8b58201fe3b8fcacea86c4941930c28f089c05e6c48087cbb4f0fd7d5938d33"} Jan 03 04:31:47 crc kubenswrapper[4865]: I0103 04:31:47.256233 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" event={"ID":"aae3d614-123b-48a1-81fa-84f2c04b3923","Type":"ContainerStarted","Data":"79347b971a6dcf8cd51bb0d615461386e5e8048c76863e28d2c62892ea532e2a"} Jan 03 04:31:47 crc kubenswrapper[4865]: I0103 04:31:47.257731 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" event={"ID":"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce","Type":"ContainerStarted","Data":"8f4928777ad6f73a55da8fdf66f1aad664ed1133fb9e66e7827fc853efce721c"} Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.281794 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q8rw4"] Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.283463 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.305664 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8rw4"] Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.315843 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-catalog-content\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.315906 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7c5\" (UniqueName: \"kubernetes.io/projected/3bb8d158-3c52-4e72-85ac-92551a62c043-kube-api-access-4w7c5\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.315972 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-utilities\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.418305 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-catalog-content\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.418679 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4w7c5\" (UniqueName: \"kubernetes.io/projected/3bb8d158-3c52-4e72-85ac-92551a62c043-kube-api-access-4w7c5\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.418856 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-utilities\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.419080 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-catalog-content\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.419296 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-utilities\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.446373 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7c5\" (UniqueName: \"kubernetes.io/projected/3bb8d158-3c52-4e72-85ac-92551a62c043-kube-api-access-4w7c5\") pod \"community-operators-q8rw4\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:48 crc kubenswrapper[4865]: I0103 04:31:48.640628 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:31:49 crc kubenswrapper[4865]: I0103 04:31:49.093286 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8rw4"] Jan 03 04:31:49 crc kubenswrapper[4865]: I0103 04:31:49.285134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8rw4" event={"ID":"3bb8d158-3c52-4e72-85ac-92551a62c043","Type":"ContainerStarted","Data":"11fe80eaf71f23787275e2d495a807edcf40e1e3619addb284dd5f664bf83840"} Jan 03 04:31:50 crc kubenswrapper[4865]: I0103 04:31:50.263313 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:50 crc kubenswrapper[4865]: I0103 04:31:50.263455 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:50 crc kubenswrapper[4865]: I0103 04:31:50.322222 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:50 crc kubenswrapper[4865]: I0103 04:31:50.378754 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:31:51 crc kubenswrapper[4865]: I0103 04:31:51.654440 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxsz"] Jan 03 04:31:52 crc kubenswrapper[4865]: I0103 04:31:52.310348 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6jxsz" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="registry-server" containerID="cri-o://99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9" gracePeriod=2 Jan 03 04:31:54 crc kubenswrapper[4865]: I0103 04:31:54.176982 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-8jc6g" Jan 03 04:31:54 crc kubenswrapper[4865]: I0103 04:31:54.191340 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gsncl" Jan 03 04:31:54 crc kubenswrapper[4865]: I0103 04:31:54.220220 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-zfntt" Jan 03 04:31:54 crc kubenswrapper[4865]: I0103 04:31:54.240552 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-9hsk5" Jan 03 04:31:54 crc kubenswrapper[4865]: I0103 04:31:54.270701 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-kgjdb" Jan 03 04:31:54 crc kubenswrapper[4865]: I0103 04:31:54.272904 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-cshl4" Jan 03 04:31:54 crc kubenswrapper[4865]: I0103 04:31:54.282364 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-fggfw" Jan 03 04:32:00 crc kubenswrapper[4865]: E0103 04:32:00.263418 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9 is running failed: container process not found" containerID="99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 04:32:00 crc kubenswrapper[4865]: E0103 04:32:00.264721 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9 is running failed: container process not found" containerID="99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 04:32:00 crc kubenswrapper[4865]: E0103 04:32:00.265029 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9 is running failed: container process not found" containerID="99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9" cmd=["grpc_health_probe","-addr=:50051"] Jan 03 04:32:00 crc kubenswrapper[4865]: E0103 04:32:00.265364 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6jxsz" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="registry-server" Jan 03 04:32:00 crc kubenswrapper[4865]: I0103 04:32:00.378971 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6jxsz_646ece76-4e13-4ce9-ac6a-71f27b883e30/registry-server/0.log" Jan 03 04:32:00 crc kubenswrapper[4865]: I0103 04:32:00.379739 4865 generic.go:334] "Generic (PLEG): container finished" podID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerID="99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9" exitCode=137 Jan 03 04:32:00 crc kubenswrapper[4865]: I0103 04:32:00.379810 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxsz" 
event={"ID":"646ece76-4e13-4ce9-ac6a-71f27b883e30","Type":"ContainerDied","Data":"99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9"} Jan 03 04:32:00 crc kubenswrapper[4865]: I0103 04:32:00.381455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" event={"ID":"fe531b80-38a8-4a91-95c3-cd9ffe4dee91","Type":"ContainerStarted","Data":"7c449c4ad5b0570e644cfbc774024cc7d8c32a9a9d96c4c80c494016405b2c81"} Jan 03 04:32:00 crc kubenswrapper[4865]: I0103 04:32:00.381656 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:32:00 crc kubenswrapper[4865]: I0103 04:32:00.439400 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" podStartSLOduration=47.439366146 podStartE2EDuration="47.439366146s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:32:00.437208678 +0000 UTC m=+947.554261863" watchObservedRunningTime="2026-01-03 04:32:00.439366146 +0000 UTC m=+947.556419331" Jan 03 04:32:06 crc kubenswrapper[4865]: I0103 04:32:06.156175 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-54cff86f68-wmvwl" Jan 03 04:32:08 crc kubenswrapper[4865]: I0103 04:32:08.457167 4865 generic.go:334] "Generic (PLEG): container finished" podID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerID="b7544b6ee70532adb45521f3ebad0b7c62105610b3a8862e36f1cf94fe6b9c61" exitCode=0 Jan 03 04:32:08 crc kubenswrapper[4865]: I0103 04:32:08.457209 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8rw4" 
event={"ID":"3bb8d158-3c52-4e72-85ac-92551a62c043","Type":"ContainerDied","Data":"b7544b6ee70532adb45521f3ebad0b7c62105610b3a8862e36f1cf94fe6b9c61"} Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.711826 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 04:32:09 crc kubenswrapper[4865]: E0103 04:32:09.721225 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.66:5001/openstack-k8s-operators/infra-operator:3090bb4bf81024488c215e64ddd66bc5d9e9eceb" Jan 03 04:32:09 crc kubenswrapper[4865]: E0103 04:32:09.721268 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.66:5001/openstack-k8s-operators/infra-operator:3090bb4bf81024488c215e64ddd66bc5d9e9eceb" Jan 03 04:32:09 crc kubenswrapper[4865]: E0103 04:32:09.721415 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.66:5001/openstack-k8s-operators/infra-operator:3090bb4bf81024488c215e64ddd66bc5d9e9eceb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-648996cf74-xqj6p_openstack-operators(ce7f03b5-6280-4cd8-b3a6-865329b1b9ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:32:09 crc kubenswrapper[4865]: E0103 04:32:09.723256 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" podUID="ce7f03b5-6280-4cd8-b3a6-865329b1b9ce" Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.800795 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6jxsz_646ece76-4e13-4ce9-ac6a-71f27b883e30/registry-server/0.log" Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.801591 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.902070 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcj49\" (UniqueName: \"kubernetes.io/projected/646ece76-4e13-4ce9-ac6a-71f27b883e30-kube-api-access-dcj49\") pod \"646ece76-4e13-4ce9-ac6a-71f27b883e30\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.902156 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-catalog-content\") pod \"646ece76-4e13-4ce9-ac6a-71f27b883e30\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.902192 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-utilities\") pod \"646ece76-4e13-4ce9-ac6a-71f27b883e30\" (UID: \"646ece76-4e13-4ce9-ac6a-71f27b883e30\") " Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.903440 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-utilities" (OuterVolumeSpecName: "utilities") pod 
"646ece76-4e13-4ce9-ac6a-71f27b883e30" (UID: "646ece76-4e13-4ce9-ac6a-71f27b883e30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.908823 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646ece76-4e13-4ce9-ac6a-71f27b883e30-kube-api-access-dcj49" (OuterVolumeSpecName: "kube-api-access-dcj49") pod "646ece76-4e13-4ce9-ac6a-71f27b883e30" (UID: "646ece76-4e13-4ce9-ac6a-71f27b883e30"). InnerVolumeSpecName "kube-api-access-dcj49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:32:09 crc kubenswrapper[4865]: I0103 04:32:09.932258 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "646ece76-4e13-4ce9-ac6a-71f27b883e30" (UID: "646ece76-4e13-4ce9-ac6a-71f27b883e30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.003423 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcj49\" (UniqueName: \"kubernetes.io/projected/646ece76-4e13-4ce9-ac6a-71f27b883e30-kube-api-access-dcj49\") on node \"crc\" DevicePath \"\"" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.003476 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.003493 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/646ece76-4e13-4ce9-ac6a-71f27b883e30-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.472793 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" event={"ID":"76d489d2-17da-4af8-8fc5-d8ce6451a45c","Type":"ContainerStarted","Data":"b7732e07e5cfe8447f321f1ebb95d49db45faa0f196a969a92e9972b17c69e8f"} Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.473221 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.475773 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6jxsz_646ece76-4e13-4ce9-ac6a-71f27b883e30/registry-server/0.log" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.476621 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jxsz" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.476684 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jxsz" event={"ID":"646ece76-4e13-4ce9-ac6a-71f27b883e30","Type":"ContainerDied","Data":"d21c69a0ddd93320d1325bce8cb666715429b705c8a85186982fa3ac27a4cbce"} Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.476840 4865 scope.go:117] "RemoveContainer" containerID="99366984bef8feb747726bd7c2ddbef6a7362af98b30b7165193ee36f49f39d9" Jan 03 04:32:10 crc kubenswrapper[4865]: E0103 04:32:10.477886 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.66:5001/openstack-k8s-operators/infra-operator:3090bb4bf81024488c215e64ddd66bc5d9e9eceb\\\"\"" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" podUID="ce7f03b5-6280-4cd8-b3a6-865329b1b9ce" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.495040 4865 scope.go:117] "RemoveContainer" containerID="85f6eedb808bcfaeed09523b3cc8c6432e0773718ae2d6885ceb32fe7a4acfbe" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.521344 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" podStartSLOduration=2.781050648 podStartE2EDuration="57.521327635s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:14.988960327 +0000 UTC m=+902.106013512" lastFinishedPulling="2026-01-03 04:32:09.729237324 +0000 UTC m=+956.846290499" observedRunningTime="2026-01-03 04:32:10.492684434 +0000 UTC m=+957.609737619" watchObservedRunningTime="2026-01-03 04:32:10.521327635 +0000 UTC m=+957.638380820" Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.524230 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-6jxsz"] Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.539871 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jxsz"] Jan 03 04:32:10 crc kubenswrapper[4865]: I0103 04:32:10.574107 4865 scope.go:117] "RemoveContainer" containerID="b7fce8ebf54cb8fe59317301c732e9cbe86c63ff29428e19b2c2d397e14acf93" Jan 03 04:32:11 crc kubenswrapper[4865]: I0103 04:32:11.172842 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" path="/var/lib/kubelet/pods/646ece76-4e13-4ce9-ac6a-71f27b883e30/volumes" Jan 03 04:32:11 crc kubenswrapper[4865]: I0103 04:32:11.484717 4865 generic.go:334] "Generic (PLEG): container finished" podID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerID="358b15e10c0ad3f97c181315446a22d6fb723badddf59858b5907de71ed8f22e" exitCode=0 Jan 03 04:32:11 crc kubenswrapper[4865]: I0103 04:32:11.484759 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8rw4" event={"ID":"3bb8d158-3c52-4e72-85ac-92551a62c043","Type":"ContainerDied","Data":"358b15e10c0ad3f97c181315446a22d6fb723badddf59858b5907de71ed8f22e"} Jan 03 04:32:12 crc kubenswrapper[4865]: I0103 04:32:12.498197 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" event={"ID":"aae3d614-123b-48a1-81fa-84f2c04b3923","Type":"ContainerStarted","Data":"b00bae2a62f350cc718ce9c72028b4d6c9f6f4281bdd479c260942f807ffe865"} Jan 03 04:32:12 crc kubenswrapper[4865]: I0103 04:32:12.498335 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:32:12 crc kubenswrapper[4865]: I0103 04:32:12.534255 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" podStartSLOduration=34.161410689 podStartE2EDuration="59.534233291s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:46.46341869 +0000 UTC m=+933.580471875" lastFinishedPulling="2026-01-03 04:32:11.836241282 +0000 UTC m=+958.953294477" observedRunningTime="2026-01-03 04:32:12.528532747 +0000 UTC m=+959.645585942" watchObservedRunningTime="2026-01-03 04:32:12.534233291 +0000 UTC m=+959.651286486" Jan 03 04:32:13 crc kubenswrapper[4865]: I0103 04:32:13.509058 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8rw4" event={"ID":"3bb8d158-3c52-4e72-85ac-92551a62c043","Type":"ContainerStarted","Data":"33239a92459a8cb65c17e290001377998330dd79b9989f4d49de20a3e8719549"} Jan 03 04:32:13 crc kubenswrapper[4865]: I0103 04:32:13.532660 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q8rw4" podStartSLOduration=22.732409673 podStartE2EDuration="25.532632052s" podCreationTimestamp="2026-01-03 04:31:48 +0000 UTC" firstStartedPulling="2026-01-03 04:32:09.711467835 +0000 UTC m=+956.828521020" lastFinishedPulling="2026-01-03 04:32:12.511690204 +0000 UTC m=+959.628743399" observedRunningTime="2026-01-03 04:32:13.526165718 +0000 UTC m=+960.643218903" watchObservedRunningTime="2026-01-03 04:32:13.532632052 +0000 UTC m=+960.649685257" Jan 03 04:32:14 crc kubenswrapper[4865]: I0103 04:32:14.156984 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-fzx6g" Jan 03 04:32:18 crc kubenswrapper[4865]: I0103 04:32:18.641376 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:32:18 crc kubenswrapper[4865]: I0103 04:32:18.641980 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:32:18 crc kubenswrapper[4865]: I0103 04:32:18.720424 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:32:19 crc kubenswrapper[4865]: I0103 04:32:19.621545 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:32:19 crc kubenswrapper[4865]: I0103 04:32:19.956591 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8rw4"] Jan 03 04:32:21 crc kubenswrapper[4865]: I0103 04:32:21.574874 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q8rw4" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerName="registry-server" containerID="cri-o://33239a92459a8cb65c17e290001377998330dd79b9989f4d49de20a3e8719549" gracePeriod=2 Jan 03 04:32:22 crc kubenswrapper[4865]: I0103 04:32:22.584683 4865 generic.go:334] "Generic (PLEG): container finished" podID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerID="33239a92459a8cb65c17e290001377998330dd79b9989f4d49de20a3e8719549" exitCode=0 Jan 03 04:32:22 crc kubenswrapper[4865]: I0103 04:32:22.584747 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8rw4" event={"ID":"3bb8d158-3c52-4e72-85ac-92551a62c043","Type":"ContainerDied","Data":"33239a92459a8cb65c17e290001377998330dd79b9989f4d49de20a3e8719549"} Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.113998 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.195095 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w7c5\" (UniqueName: \"kubernetes.io/projected/3bb8d158-3c52-4e72-85ac-92551a62c043-kube-api-access-4w7c5\") pod \"3bb8d158-3c52-4e72-85ac-92551a62c043\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.195161 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-catalog-content\") pod \"3bb8d158-3c52-4e72-85ac-92551a62c043\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.195283 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-utilities\") pod \"3bb8d158-3c52-4e72-85ac-92551a62c043\" (UID: \"3bb8d158-3c52-4e72-85ac-92551a62c043\") " Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.196450 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-utilities" (OuterVolumeSpecName: "utilities") pod "3bb8d158-3c52-4e72-85ac-92551a62c043" (UID: "3bb8d158-3c52-4e72-85ac-92551a62c043"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.201660 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb8d158-3c52-4e72-85ac-92551a62c043-kube-api-access-4w7c5" (OuterVolumeSpecName: "kube-api-access-4w7c5") pod "3bb8d158-3c52-4e72-85ac-92551a62c043" (UID: "3bb8d158-3c52-4e72-85ac-92551a62c043"). InnerVolumeSpecName "kube-api-access-4w7c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.242634 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bb8d158-3c52-4e72-85ac-92551a62c043" (UID: "3bb8d158-3c52-4e72-85ac-92551a62c043"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.297556 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.297589 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w7c5\" (UniqueName: \"kubernetes.io/projected/3bb8d158-3c52-4e72-85ac-92551a62c043-kube-api-access-4w7c5\") on node \"crc\" DevicePath \"\"" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.297604 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb8d158-3c52-4e72-85ac-92551a62c043-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.596068 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8rw4" event={"ID":"3bb8d158-3c52-4e72-85ac-92551a62c043","Type":"ContainerDied","Data":"11fe80eaf71f23787275e2d495a807edcf40e1e3619addb284dd5f664bf83840"} Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.596145 4865 scope.go:117] "RemoveContainer" containerID="33239a92459a8cb65c17e290001377998330dd79b9989f4d49de20a3e8719549" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.596322 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8rw4" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.625282 4865 scope.go:117] "RemoveContainer" containerID="358b15e10c0ad3f97c181315446a22d6fb723badddf59858b5907de71ed8f22e" Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.663607 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8rw4"] Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.671957 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q8rw4"] Jan 03 04:32:23 crc kubenswrapper[4865]: I0103 04:32:23.676327 4865 scope.go:117] "RemoveContainer" containerID="b7544b6ee70532adb45521f3ebad0b7c62105610b3a8862e36f1cf94fe6b9c61" Jan 03 04:32:25 crc kubenswrapper[4865]: I0103 04:32:25.170712 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" path="/var/lib/kubelet/pods/3bb8d158-3c52-4e72-85ac-92551a62c043/volumes" Jan 03 04:32:25 crc kubenswrapper[4865]: I0103 04:32:25.618324 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" event={"ID":"ce7f03b5-6280-4cd8-b3a6-865329b1b9ce","Type":"ContainerStarted","Data":"de942ce8802bcbd37a81f22921b75177ae9d4f24559e902bc00c94b7686b64b5"} Jan 03 04:32:25 crc kubenswrapper[4865]: I0103 04:32:25.618678 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:32:25 crc kubenswrapper[4865]: I0103 04:32:25.697352 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" podStartSLOduration=34.495602712 podStartE2EDuration="1m12.697327777s" podCreationTimestamp="2026-01-03 04:31:13 +0000 UTC" firstStartedPulling="2026-01-03 04:31:46.320753823 +0000 UTC 
m=+933.437807008" lastFinishedPulling="2026-01-03 04:32:24.522478848 +0000 UTC m=+971.639532073" observedRunningTime="2026-01-03 04:32:25.685788127 +0000 UTC m=+972.802841322" watchObservedRunningTime="2026-01-03 04:32:25.697327777 +0000 UTC m=+972.814381002" Jan 03 04:32:26 crc kubenswrapper[4865]: I0103 04:32:26.002104 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq" Jan 03 04:32:35 crc kubenswrapper[4865]: I0103 04:32:35.788390 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-648996cf74-xqj6p" Jan 03 04:32:40 crc kubenswrapper[4865]: I0103 04:32:40.740695 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:32:40 crc kubenswrapper[4865]: I0103 04:32:40.741327 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.875632 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qndgj"] Jan 03 04:32:52 crc kubenswrapper[4865]: E0103 04:32:52.876413 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerName="extract-utilities" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876445 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" 
containerName="extract-utilities" Jan 03 04:32:52 crc kubenswrapper[4865]: E0103 04:32:52.876460 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="extract-utilities" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876470 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="extract-utilities" Jan 03 04:32:52 crc kubenswrapper[4865]: E0103 04:32:52.876481 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="registry-server" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876490 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="registry-server" Jan 03 04:32:52 crc kubenswrapper[4865]: E0103 04:32:52.876505 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="extract-content" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876514 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="extract-content" Jan 03 04:32:52 crc kubenswrapper[4865]: E0103 04:32:52.876527 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerName="extract-content" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876535 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerName="extract-content" Jan 03 04:32:52 crc kubenswrapper[4865]: E0103 04:32:52.876555 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerName="registry-server" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876564 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" 
containerName="registry-server" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876728 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb8d158-3c52-4e72-85ac-92551a62c043" containerName="registry-server" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.876744 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="646ece76-4e13-4ce9-ac6a-71f27b883e30" containerName="registry-server" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.877661 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.881505 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-94gzz" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.881709 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.881883 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.882016 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.889733 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qndgj"] Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.953827 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8g8cc"] Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.956311 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.958648 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 03 04:32:52 crc kubenswrapper[4865]: I0103 04:32:52.962865 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8g8cc"] Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.047733 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.047781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42mb\" (UniqueName: \"kubernetes.io/projected/185a9d11-e10d-46e7-827a-a2fb17f987af-kube-api-access-s42mb\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.047815 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-config\") pod \"dnsmasq-dns-675f4bcbfc-qndgj\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.047880 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-config\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 
crc kubenswrapper[4865]: I0103 04:32:53.047940 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wttgm\" (UniqueName: \"kubernetes.io/projected/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-kube-api-access-wttgm\") pod \"dnsmasq-dns-675f4bcbfc-qndgj\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.149264 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wttgm\" (UniqueName: \"kubernetes.io/projected/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-kube-api-access-wttgm\") pod \"dnsmasq-dns-675f4bcbfc-qndgj\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.149358 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.149413 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s42mb\" (UniqueName: \"kubernetes.io/projected/185a9d11-e10d-46e7-827a-a2fb17f987af-kube-api-access-s42mb\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.149450 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-config\") pod \"dnsmasq-dns-675f4bcbfc-qndgj\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:53 crc kubenswrapper[4865]: 
I0103 04:32:53.149489 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-config\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.150627 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.150673 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-config\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.151144 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-config\") pod \"dnsmasq-dns-675f4bcbfc-qndgj\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.172027 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wttgm\" (UniqueName: \"kubernetes.io/projected/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-kube-api-access-wttgm\") pod \"dnsmasq-dns-675f4bcbfc-qndgj\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.174533 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s42mb\" (UniqueName: 
\"kubernetes.io/projected/185a9d11-e10d-46e7-827a-a2fb17f987af-kube-api-access-s42mb\") pod \"dnsmasq-dns-78dd6ddcc-8g8cc\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.204731 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.272715 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.682959 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qndgj"] Jan 03 04:32:53 crc kubenswrapper[4865]: W0103 04:32:53.745068 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod185a9d11_e10d_46e7_827a_a2fb17f987af.slice/crio-996dbc334d2775c2e17cf302224ca37f4e91ba85fbd0528f64c7b94413795778 WatchSource:0}: Error finding container 996dbc334d2775c2e17cf302224ca37f4e91ba85fbd0528f64c7b94413795778: Status 404 returned error can't find the container with id 996dbc334d2775c2e17cf302224ca37f4e91ba85fbd0528f64c7b94413795778 Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.745974 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8g8cc"] Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.846297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" event={"ID":"1edd1e0c-3ea2-4927-bf06-582f04d05ebd","Type":"ContainerStarted","Data":"17f25c6e9cf27c32f7bc6eab9b1c092f9b245a55091486a71c735fdd9cb72a62"} Jan 03 04:32:53 crc kubenswrapper[4865]: I0103 04:32:53.847619 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" 
event={"ID":"185a9d11-e10d-46e7-827a-a2fb17f987af","Type":"ContainerStarted","Data":"996dbc334d2775c2e17cf302224ca37f4e91ba85fbd0528f64c7b94413795778"} Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.676022 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qndgj"] Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.681185 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7wqzw"] Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.684472 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.692054 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7wqzw"] Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.788552 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-config\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.789115 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.789204 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8s8k\" (UniqueName: \"kubernetes.io/projected/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-kube-api-access-d8s8k\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " 
pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.892863 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8g8cc"] Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.902182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8s8k\" (UniqueName: \"kubernetes.io/projected/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-kube-api-access-d8s8k\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.902266 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-config\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.902303 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.903166 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.903831 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-config\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" 
(UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.909450 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-29qbb"] Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.912049 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.925826 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-29qbb"] Jan 03 04:32:55 crc kubenswrapper[4865]: I0103 04:32:55.947808 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8s8k\" (UniqueName: \"kubernetes.io/projected/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-kube-api-access-d8s8k\") pod \"dnsmasq-dns-666b6646f7-7wqzw\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.007977 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.105935 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfjws\" (UniqueName: \"kubernetes.io/projected/6f490453-952c-44f5-b3d1-0b7145123630-kube-api-access-tfjws\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.106042 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-config\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.106068 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.207098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfjws\" (UniqueName: \"kubernetes.io/projected/6f490453-952c-44f5-b3d1-0b7145123630-kube-api-access-tfjws\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.207556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-config\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: 
\"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.207583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.208647 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.208950 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-config\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.224925 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfjws\" (UniqueName: \"kubernetes.io/projected/6f490453-952c-44f5-b3d1-0b7145123630-kube-api-access-tfjws\") pod \"dnsmasq-dns-57d769cc4f-29qbb\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.239655 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.436837 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7wqzw"] Jan 03 04:32:56 crc kubenswrapper[4865]: W0103 04:32:56.446459 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5041ca65_7cb4_42a2_aec5_fa278b0d6a7c.slice/crio-886de50073786ceea30e1d167f43fd198d203290809e030fefe9a68f4abb8672 WatchSource:0}: Error finding container 886de50073786ceea30e1d167f43fd198d203290809e030fefe9a68f4abb8672: Status 404 returned error can't find the container with id 886de50073786ceea30e1d167f43fd198d203290809e030fefe9a68f4abb8672 Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.660796 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-29qbb"] Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.803255 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.807238 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.809356 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.809632 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.809869 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.809888 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.809925 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.809979 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xqmxl" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.810643 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.814797 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.876726 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" event={"ID":"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c","Type":"ContainerStarted","Data":"886de50073786ceea30e1d167f43fd198d203290809e030fefe9a68f4abb8672"} Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.915997 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916054 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916146 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916168 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bv9\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-kube-api-access-f4bv9\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " 
pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916450 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d3d1e308-7d01-4224-9cc0-a5ed59256c80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916522 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-config-data\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916617 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3d1e308-7d01-4224-9cc0-a5ed59256c80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916644 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:56 crc kubenswrapper[4865]: I0103 04:32:56.916732 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 
04:32:57.017884 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.017940 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.017959 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.017974 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4bv9\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-kube-api-access-f4bv9\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018012 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d3d1e308-7d01-4224-9cc0-a5ed59256c80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018030 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-config-data\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018108 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3d1e308-7d01-4224-9cc0-a5ed59256c80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018243 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018271 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018292 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018308 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018680 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018903 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.018912 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.019016 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.019561 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.020224 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-config-data\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.021937 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3d1e308-7d01-4224-9cc0-a5ed59256c80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.022549 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d3d1e308-7d01-4224-9cc0-a5ed59256c80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.037528 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.039996 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.048168 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.049052 4865 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-f4bv9\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-kube-api-access-f4bv9\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.050792 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.053806 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.054402 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.054740 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.054931 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.055029 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.058517 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.059805 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.060017 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2z5mc" Jan 03 04:32:57 crc 
kubenswrapper[4865]: I0103 04:32:57.087152 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.133366 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.221786 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.221831 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.221851 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9622l\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-kube-api-access-9622l\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.221883 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.221959 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.222207 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.222245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.222281 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.222301 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.222325 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.222361 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323664 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323712 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323733 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9622l\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-kube-api-access-9622l\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323761 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323782 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323801 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323829 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.323880 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.324462 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.324539 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.324597 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.324671 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.325225 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.325337 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.327885 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.332861 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.338571 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.338808 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 
04:32:57.338916 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.339458 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.350268 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.365154 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9622l\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-kube-api-access-9622l\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:57 crc kubenswrapper[4865]: I0103 04:32:57.421866 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.452151 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.456989 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.459150 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.459348 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.459432 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2qb22" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.459858 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.462177 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.465349 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653402 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eba40ea-25a4-4887-aa6c-7feb32b91491-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653454 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-config-data-default\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653531 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2eba40ea-25a4-4887-aa6c-7feb32b91491-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653636 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eba40ea-25a4-4887-aa6c-7feb32b91491-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-kolla-config\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653693 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjnrq\" (UniqueName: \"kubernetes.io/projected/2eba40ea-25a4-4887-aa6c-7feb32b91491-kube-api-access-gjnrq\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653716 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.653768 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755021 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eba40ea-25a4-4887-aa6c-7feb32b91491-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-kolla-config\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755095 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjnrq\" (UniqueName: \"kubernetes.io/projected/2eba40ea-25a4-4887-aa6c-7feb32b91491-kube-api-access-gjnrq\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755126 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755153 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: 
\"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eba40ea-25a4-4887-aa6c-7feb32b91491-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755193 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-config-data-default\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755225 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2eba40ea-25a4-4887-aa6c-7feb32b91491-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755369 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.755844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2eba40ea-25a4-4887-aa6c-7feb32b91491-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc 
kubenswrapper[4865]: I0103 04:32:58.756157 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-kolla-config\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.756565 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-config-data-default\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.757271 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eba40ea-25a4-4887-aa6c-7feb32b91491-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.762563 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eba40ea-25a4-4887-aa6c-7feb32b91491-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.766370 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eba40ea-25a4-4887-aa6c-7feb32b91491-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.773354 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjnrq\" (UniqueName: 
\"kubernetes.io/projected/2eba40ea-25a4-4887-aa6c-7feb32b91491-kube-api-access-gjnrq\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.775797 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"2eba40ea-25a4-4887-aa6c-7feb32b91491\") " pod="openstack/openstack-galera-0" Jan 03 04:32:58 crc kubenswrapper[4865]: I0103 04:32:58.779644 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.820733 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.824254 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.827646 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.827673 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.827905 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-l59qx" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.828062 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.829091 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.900124 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" event={"ID":"6f490453-952c-44f5-b3d1-0b7145123630","Type":"ContainerStarted","Data":"a05a244fe1275070378d4864b989de476dc0e600b88d0cae29bba69a614c0f4f"} Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.979806 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.979868 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.979895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20843cac-0ba6-4f8f-b767-dd61fdb4f160-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.980267 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.980310 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gbwc4\" (UniqueName: \"kubernetes.io/projected/20843cac-0ba6-4f8f-b767-dd61fdb4f160-kube-api-access-gbwc4\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.980420 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/20843cac-0ba6-4f8f-b767-dd61fdb4f160-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.980446 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20843cac-0ba6-4f8f-b767-dd61fdb4f160-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:32:59 crc kubenswrapper[4865]: I0103 04:32:59.980500 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083213 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083334 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083415 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20843cac-0ba6-4f8f-b767-dd61fdb4f160-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083463 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083492 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwc4\" (UniqueName: \"kubernetes.io/projected/20843cac-0ba6-4f8f-b767-dd61fdb4f160-kube-api-access-gbwc4\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083533 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/20843cac-0ba6-4f8f-b767-dd61fdb4f160-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.083556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20843cac-0ba6-4f8f-b767-dd61fdb4f160-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.084773 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.085426 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.085737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/20843cac-0ba6-4f8f-b767-dd61fdb4f160-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.087489 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.088006 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/20843cac-0ba6-4f8f-b767-dd61fdb4f160-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.089176 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/20843cac-0ba6-4f8f-b767-dd61fdb4f160-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.093672 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20843cac-0ba6-4f8f-b767-dd61fdb4f160-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.105008 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwc4\" (UniqueName: \"kubernetes.io/projected/20843cac-0ba6-4f8f-b767-dd61fdb4f160-kube-api-access-gbwc4\") pod \"openstack-cell1-galera-0\" (UID: \"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.109514 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"20843cac-0ba6-4f8f-b767-dd61fdb4f160\") " pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.147169 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.218933 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.219815 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.221442 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.221576 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sfvjp" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.225191 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.235170 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.390466 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31864768-e1b4-438d-b88d-a5a8f9e89e5e-config-data\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.390571 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ktj\" (UniqueName: \"kubernetes.io/projected/31864768-e1b4-438d-b88d-a5a8f9e89e5e-kube-api-access-x9ktj\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 
04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.390621 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31864768-e1b4-438d-b88d-a5a8f9e89e5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.390637 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31864768-e1b4-438d-b88d-a5a8f9e89e5e-kolla-config\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.390674 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31864768-e1b4-438d-b88d-a5a8f9e89e5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.493997 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31864768-e1b4-438d-b88d-a5a8f9e89e5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.494051 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31864768-e1b4-438d-b88d-a5a8f9e89e5e-config-data\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.494098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x9ktj\" (UniqueName: \"kubernetes.io/projected/31864768-e1b4-438d-b88d-a5a8f9e89e5e-kube-api-access-x9ktj\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.494143 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31864768-e1b4-438d-b88d-a5a8f9e89e5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.494163 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31864768-e1b4-438d-b88d-a5a8f9e89e5e-kolla-config\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.495056 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31864768-e1b4-438d-b88d-a5a8f9e89e5e-config-data\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.497028 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31864768-e1b4-438d-b88d-a5a8f9e89e5e-kolla-config\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.498141 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31864768-e1b4-438d-b88d-a5a8f9e89e5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc 
kubenswrapper[4865]: I0103 04:33:00.499267 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31864768-e1b4-438d-b88d-a5a8f9e89e5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.511623 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9ktj\" (UniqueName: \"kubernetes.io/projected/31864768-e1b4-438d-b88d-a5a8f9e89e5e-kube-api-access-x9ktj\") pod \"memcached-0\" (UID: \"31864768-e1b4-438d-b88d-a5a8f9e89e5e\") " pod="openstack/memcached-0" Jan 03 04:33:00 crc kubenswrapper[4865]: I0103 04:33:00.534842 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.493865 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.495257 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.502016 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.504269 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-smf5r" Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.623610 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95rc\" (UniqueName: \"kubernetes.io/projected/3c06d4fd-0e97-401a-a450-92e7e1c22131-kube-api-access-z95rc\") pod \"kube-state-metrics-0\" (UID: \"3c06d4fd-0e97-401a-a450-92e7e1c22131\") " pod="openstack/kube-state-metrics-0" Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.725513 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95rc\" (UniqueName: \"kubernetes.io/projected/3c06d4fd-0e97-401a-a450-92e7e1c22131-kube-api-access-z95rc\") pod \"kube-state-metrics-0\" (UID: \"3c06d4fd-0e97-401a-a450-92e7e1c22131\") " pod="openstack/kube-state-metrics-0" Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.741922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95rc\" (UniqueName: \"kubernetes.io/projected/3c06d4fd-0e97-401a-a450-92e7e1c22131-kube-api-access-z95rc\") pod \"kube-state-metrics-0\" (UID: \"3c06d4fd-0e97-401a-a450-92e7e1c22131\") " pod="openstack/kube-state-metrics-0" Jan 03 04:33:02 crc kubenswrapper[4865]: I0103 04:33:02.823109 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.730474 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gpwwp"] Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.731772 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.744055 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gpwwp"] Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.744348 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-x2h2c" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.744586 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.744736 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.754276 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5hbf4"] Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.767617 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.793307 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5hbf4"] Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.873677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-etc-ovs\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.873725 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgps5\" (UniqueName: \"kubernetes.io/projected/334ea42d-9265-43f9-8c4c-fdf516746069-kube-api-access-jgps5\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.873746 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334ea42d-9265-43f9-8c4c-fdf516746069-scripts\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.873924 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-lib\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.873972 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjgg\" (UniqueName: 
\"kubernetes.io/projected/5dc49e44-6dba-457d-b535-41a724d9640f-kube-api-access-zsjgg\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874035 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-log-ovn\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874081 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dc49e44-6dba-457d-b535-41a724d9640f-scripts\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874119 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334ea42d-9265-43f9-8c4c-fdf516746069-combined-ca-bundle\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-run\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874221 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-run\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874286 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-run-ovn\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874323 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-log\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.874348 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334ea42d-9265-43f9-8c4c-fdf516746069-ovn-controller-tls-certs\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.975761 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dc49e44-6dba-457d-b535-41a724d9640f-scripts\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.975813 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/334ea42d-9265-43f9-8c4c-fdf516746069-combined-ca-bundle\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.975854 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-run\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.975885 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-run\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.975921 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-run-ovn\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.975949 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-log\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.975972 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334ea42d-9265-43f9-8c4c-fdf516746069-ovn-controller-tls-certs\") pod \"ovn-controller-gpwwp\" (UID: 
\"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976628 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-run\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976779 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-run-ovn\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976791 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-etc-ovs\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976827 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334ea42d-9265-43f9-8c4c-fdf516746069-scripts\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976840 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-run\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976848 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jgps5\" (UniqueName: \"kubernetes.io/projected/334ea42d-9265-43f9-8c4c-fdf516746069-kube-api-access-jgps5\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976905 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-lib\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976930 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjgg\" (UniqueName: \"kubernetes.io/projected/5dc49e44-6dba-457d-b535-41a724d9640f-kube-api-access-zsjgg\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976969 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-log-ovn\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976997 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-etc-ovs\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.976961 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-log\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.977195 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5dc49e44-6dba-457d-b535-41a724d9640f-var-lib\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.977521 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334ea42d-9265-43f9-8c4c-fdf516746069-var-log-ovn\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.978753 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dc49e44-6dba-457d-b535-41a724d9640f-scripts\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.987002 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334ea42d-9265-43f9-8c4c-fdf516746069-ovn-controller-tls-certs\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.987670 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334ea42d-9265-43f9-8c4c-fdf516746069-combined-ca-bundle\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " 
pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.996344 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334ea42d-9265-43f9-8c4c-fdf516746069-scripts\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.998944 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgps5\" (UniqueName: \"kubernetes.io/projected/334ea42d-9265-43f9-8c4c-fdf516746069-kube-api-access-jgps5\") pod \"ovn-controller-gpwwp\" (UID: \"334ea42d-9265-43f9-8c4c-fdf516746069\") " pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:05 crc kubenswrapper[4865]: I0103 04:33:05.999638 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjgg\" (UniqueName: \"kubernetes.io/projected/5dc49e44-6dba-457d-b535-41a724d9640f-kube-api-access-zsjgg\") pod \"ovn-controller-ovs-5hbf4\" (UID: \"5dc49e44-6dba-457d-b535-41a724d9640f\") " pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.056249 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.087474 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.616097 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.617442 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.621234 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.621811 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-v4wtv" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.621951 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.622540 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.623323 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.637270 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686291 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfxg\" (UniqueName: \"kubernetes.io/projected/4b281b80-3b3a-4c04-a904-669d66ec4a74-kube-api-access-dcfxg\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686343 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b281b80-3b3a-4c04-a904-669d66ec4a74-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686371 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686453 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686513 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b281b80-3b3a-4c04-a904-669d66ec4a74-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.686714 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4b281b80-3b3a-4c04-a904-669d66ec4a74-config\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.787758 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.787846 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.787870 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b281b80-3b3a-4c04-a904-669d66ec4a74-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.787902 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.787922 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b281b80-3b3a-4c04-a904-669d66ec4a74-config\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " 
pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.788457 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.788483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfxg\" (UniqueName: \"kubernetes.io/projected/4b281b80-3b3a-4c04-a904-669d66ec4a74-kube-api-access-dcfxg\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.788547 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b281b80-3b3a-4c04-a904-669d66ec4a74-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.788756 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.788982 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b281b80-3b3a-4c04-a904-669d66ec4a74-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.789862 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4b281b80-3b3a-4c04-a904-669d66ec4a74-config\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.790255 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b281b80-3b3a-4c04-a904-669d66ec4a74-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.792257 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.795230 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.799009 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b281b80-3b3a-4c04-a904-669d66ec4a74-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.807221 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfxg\" (UniqueName: \"kubernetes.io/projected/4b281b80-3b3a-4c04-a904-669d66ec4a74-kube-api-access-dcfxg\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " 
pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.808695 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4b281b80-3b3a-4c04-a904-669d66ec4a74\") " pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:06 crc kubenswrapper[4865]: I0103 04:33:06.933158 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:08 crc kubenswrapper[4865]: E0103 04:33:08.872001 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 03 04:33:08 crc kubenswrapper[4865]: E0103 04:33:08.872406 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wttgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-qndgj_openstack(1edd1e0c-3ea2-4927-bf06-582f04d05ebd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:33:08 crc kubenswrapper[4865]: E0103 04:33:08.873614 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" podUID="1edd1e0c-3ea2-4927-bf06-582f04d05ebd" Jan 03 04:33:08 crc kubenswrapper[4865]: E0103 04:33:08.907116 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 03 04:33:08 crc kubenswrapper[4865]: E0103 04:33:08.907292 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s42mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8g8cc_openstack(185a9d11-e10d-46e7-827a-a2fb17f987af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:33:08 crc kubenswrapper[4865]: E0103 04:33:08.908567 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" podUID="185a9d11-e10d-46e7-827a-a2fb17f987af" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.277924 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.279837 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.285710 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.285942 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.285963 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.286169 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2nj87" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.304555 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.426344 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9d25819-b14f-411e-a158-b9f315cf13d6-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.426426 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9d25819-b14f-411e-a158-b9f315cf13d6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.426513 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d25819-b14f-411e-a158-b9f315cf13d6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " 
pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.426558 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.426757 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.426912 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.426973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.427030 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz25j\" (UniqueName: \"kubernetes.io/projected/c9d25819-b14f-411e-a158-b9f315cf13d6-kube-api-access-hz25j\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc 
kubenswrapper[4865]: I0103 04:33:09.486762 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.494875 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.528895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.528949 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz25j\" (UniqueName: \"kubernetes.io/projected/c9d25819-b14f-411e-a158-b9f315cf13d6-kube-api-access-hz25j\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.529031 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9d25819-b14f-411e-a158-b9f315cf13d6-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.529066 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9d25819-b14f-411e-a158-b9f315cf13d6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.529098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c9d25819-b14f-411e-a158-b9f315cf13d6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.529120 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.529178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.529231 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.529624 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.530271 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9d25819-b14f-411e-a158-b9f315cf13d6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " 
pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.530570 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9d25819-b14f-411e-a158-b9f315cf13d6-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.532794 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9d25819-b14f-411e-a158-b9f315cf13d6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.538858 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.539330 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.548576 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9d25819-b14f-411e-a158-b9f315cf13d6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.550118 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz25j\" (UniqueName: 
\"kubernetes.io/projected/c9d25819-b14f-411e-a158-b9f315cf13d6-kube-api-access-hz25j\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.556939 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.565556 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9d25819-b14f-411e-a158-b9f315cf13d6\") " pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.611008 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.631483 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s42mb\" (UniqueName: \"kubernetes.io/projected/185a9d11-e10d-46e7-827a-a2fb17f987af-kube-api-access-s42mb\") pod \"185a9d11-e10d-46e7-827a-a2fb17f987af\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.631590 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-config\") pod \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.631618 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-dns-svc\") pod \"185a9d11-e10d-46e7-827a-a2fb17f987af\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.631662 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-config\") pod \"185a9d11-e10d-46e7-827a-a2fb17f987af\" (UID: \"185a9d11-e10d-46e7-827a-a2fb17f987af\") " Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.631688 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wttgm\" (UniqueName: \"kubernetes.io/projected/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-kube-api-access-wttgm\") pod \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\" (UID: \"1edd1e0c-3ea2-4927-bf06-582f04d05ebd\") " Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.632746 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-config" (OuterVolumeSpecName: "config") pod "1edd1e0c-3ea2-4927-bf06-582f04d05ebd" (UID: "1edd1e0c-3ea2-4927-bf06-582f04d05ebd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.633298 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "185a9d11-e10d-46e7-827a-a2fb17f987af" (UID: "185a9d11-e10d-46e7-827a-a2fb17f987af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.633411 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-config" (OuterVolumeSpecName: "config") pod "185a9d11-e10d-46e7-827a-a2fb17f987af" (UID: "185a9d11-e10d-46e7-827a-a2fb17f987af"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.635365 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-kube-api-access-wttgm" (OuterVolumeSpecName: "kube-api-access-wttgm") pod "1edd1e0c-3ea2-4927-bf06-582f04d05ebd" (UID: "1edd1e0c-3ea2-4927-bf06-582f04d05ebd"). InnerVolumeSpecName "kube-api-access-wttgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.635545 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185a9d11-e10d-46e7-827a-a2fb17f987af-kube-api-access-s42mb" (OuterVolumeSpecName: "kube-api-access-s42mb") pod "185a9d11-e10d-46e7-827a-a2fb17f987af" (UID: "185a9d11-e10d-46e7-827a-a2fb17f987af"). InnerVolumeSpecName "kube-api-access-s42mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.659717 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 03 04:33:09 crc kubenswrapper[4865]: W0103 04:33:09.669277 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eba40ea_25a4_4887_aa6c_7feb32b91491.slice/crio-ee6385e22cb17cf0d86c2d5dabc5366751d50d4cd4dd8a360fa11fa1455702af WatchSource:0}: Error finding container ee6385e22cb17cf0d86c2d5dabc5366751d50d4cd4dd8a360fa11fa1455702af: Status 404 returned error can't find the container with id ee6385e22cb17cf0d86c2d5dabc5366751d50d4cd4dd8a360fa11fa1455702af Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.674410 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.739463 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s42mb\" (UniqueName: 
\"kubernetes.io/projected/185a9d11-e10d-46e7-827a-a2fb17f987af-kube-api-access-s42mb\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.739750 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.739760 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.739770 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185a9d11-e10d-46e7-827a-a2fb17f987af-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:09 crc kubenswrapper[4865]: I0103 04:33:09.739779 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wttgm\" (UniqueName: \"kubernetes.io/projected/1edd1e0c-3ea2-4927-bf06-582f04d05ebd-kube-api-access-wttgm\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.003085 4865 generic.go:334] "Generic (PLEG): container finished" podID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerID="f400667168c3fc3b698e40c90ed8d5007fecfda3332fd082b1f229d3c0c5d8f2" exitCode=0 Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.003181 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" event={"ID":"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c","Type":"ContainerDied","Data":"f400667168c3fc3b698e40c90ed8d5007fecfda3332fd082b1f229d3c0c5d8f2"} Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.009093 4865 generic.go:334] "Generic (PLEG): container finished" podID="6f490453-952c-44f5-b3d1-0b7145123630" containerID="afb9eb00f849d9a9d07db69bd94da92b987e385b60dddd5fd464f2dd18df92da" exitCode=0 Jan 03 
04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.009151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" event={"ID":"6f490453-952c-44f5-b3d1-0b7145123630","Type":"ContainerDied","Data":"afb9eb00f849d9a9d07db69bd94da92b987e385b60dddd5fd464f2dd18df92da"} Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.025374 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" event={"ID":"1edd1e0c-3ea2-4927-bf06-582f04d05ebd","Type":"ContainerDied","Data":"17f25c6e9cf27c32f7bc6eab9b1c092f9b245a55091486a71c735fdd9cb72a62"} Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.025563 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qndgj" Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.032497 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" event={"ID":"185a9d11-e10d-46e7-827a-a2fb17f987af","Type":"ContainerDied","Data":"996dbc334d2775c2e17cf302224ca37f4e91ba85fbd0528f64c7b94413795778"} Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.032623 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8g8cc" Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.054334 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3d1e308-7d01-4224-9cc0-a5ed59256c80","Type":"ContainerStarted","Data":"4c0b2b0175122e6030c390633bb3281dc42a8f9ec5f48757fe73bc85d93913df"} Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.058681 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.066838 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41","Type":"ContainerStarted","Data":"aa435c949996a4f4d334a4d30973757d8b21906cf17326c6816c56a8bcc3f632"} Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.071023 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gpwwp"] Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.073577 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2eba40ea-25a4-4887-aa6c-7feb32b91491","Type":"ContainerStarted","Data":"ee6385e22cb17cf0d86c2d5dabc5366751d50d4cd4dd8a360fa11fa1455702af"} Jan 03 04:33:10 crc kubenswrapper[4865]: W0103 04:33:10.080560 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod334ea42d_9265_43f9_8c4c_fdf516746069.slice/crio-221db98c7e01fae47396fe97ea9a26d12d6dec1b8081ae84576389473b78bc11 WatchSource:0}: Error finding container 221db98c7e01fae47396fe97ea9a26d12d6dec1b8081ae84576389473b78bc11: Status 404 returned error can't find the container with id 221db98c7e01fae47396fe97ea9a26d12d6dec1b8081ae84576389473b78bc11 Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.086026 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 03 
04:33:10 crc kubenswrapper[4865]: W0103 04:33:10.108166 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20843cac_0ba6_4f8f_b767_dd61fdb4f160.slice/crio-99a4c376d2e1c289061bfad5b99db9ea655bfaa4f6fbf3767ee2d24f0e38c38c WatchSource:0}: Error finding container 99a4c376d2e1c289061bfad5b99db9ea655bfaa4f6fbf3767ee2d24f0e38c38c: Status 404 returned error can't find the container with id 99a4c376d2e1c289061bfad5b99db9ea655bfaa4f6fbf3767ee2d24f0e38c38c Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.115930 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:33:10 crc kubenswrapper[4865]: W0103 04:33:10.123764 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c06d4fd_0e97_401a_a450_92e7e1c22131.slice/crio-e0b15078b7f157c77ce52769bb9990e0922d5c79cbbf35562c0a796d6a0c8242 WatchSource:0}: Error finding container e0b15078b7f157c77ce52769bb9990e0922d5c79cbbf35562c0a796d6a0c8242: Status 404 returned error can't find the container with id e0b15078b7f157c77ce52769bb9990e0922d5c79cbbf35562c0a796d6a0c8242 Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.189872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5hbf4"] Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.218788 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8g8cc"] Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.229827 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8g8cc"] Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.242213 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qndgj"] Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.248366 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-qndgj"] Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.281279 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 03 04:33:10 crc kubenswrapper[4865]: W0103 04:33:10.284781 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b281b80_3b3a_4c04_a904_669d66ec4a74.slice/crio-bef6da60140953f5e08b98e9c149a638df3e42786db188d5719c57096863a182 WatchSource:0}: Error finding container bef6da60140953f5e08b98e9c149a638df3e42786db188d5719c57096863a182: Status 404 returned error can't find the container with id bef6da60140953f5e08b98e9c149a638df3e42786db188d5719c57096863a182 Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.739986 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.740057 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:33:10 crc kubenswrapper[4865]: I0103 04:33:10.968670 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.082511 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" event={"ID":"6f490453-952c-44f5-b3d1-0b7145123630","Type":"ContainerStarted","Data":"da4a2dbe946df3922e014729b9aefcc572e1961cd52437f2674cc832140daac6"} Jan 03 04:33:11 crc kubenswrapper[4865]: 
I0103 04:33:11.083181 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.083759 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4b281b80-3b3a-4c04-a904-669d66ec4a74","Type":"ContainerStarted","Data":"bef6da60140953f5e08b98e9c149a638df3e42786db188d5719c57096863a182"} Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.085025 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5hbf4" event={"ID":"5dc49e44-6dba-457d-b535-41a724d9640f","Type":"ContainerStarted","Data":"657282f45f7ea43eefd1099901f34669f53e696bd23c4e540e9960344abb46de"} Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.086260 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"31864768-e1b4-438d-b88d-a5a8f9e89e5e","Type":"ContainerStarted","Data":"754e0751ad42671f75f3717640e0c87dc759c6b25ef5a6c210882eede584b506"} Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.087489 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gpwwp" event={"ID":"334ea42d-9265-43f9-8c4c-fdf516746069","Type":"ContainerStarted","Data":"221db98c7e01fae47396fe97ea9a26d12d6dec1b8081ae84576389473b78bc11"} Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.088690 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"20843cac-0ba6-4f8f-b767-dd61fdb4f160","Type":"ContainerStarted","Data":"99a4c376d2e1c289061bfad5b99db9ea655bfaa4f6fbf3767ee2d24f0e38c38c"} Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.089983 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c06d4fd-0e97-401a-a450-92e7e1c22131","Type":"ContainerStarted","Data":"e0b15078b7f157c77ce52769bb9990e0922d5c79cbbf35562c0a796d6a0c8242"} Jan 03 04:33:11 crc 
kubenswrapper[4865]: I0103 04:33:11.092113 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" event={"ID":"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c","Type":"ContainerStarted","Data":"7cf990c36612fc03e043dd2914c34a1ce96765cda7b2cc71edf393ec51c6371d"} Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.092361 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.107163 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" podStartSLOduration=6.17522491 podStartE2EDuration="16.107148693s" podCreationTimestamp="2026-01-03 04:32:55 +0000 UTC" firstStartedPulling="2026-01-03 04:32:59.118099894 +0000 UTC m=+1006.235153069" lastFinishedPulling="2026-01-03 04:33:09.050023667 +0000 UTC m=+1016.167076852" observedRunningTime="2026-01-03 04:33:11.10145112 +0000 UTC m=+1018.218504305" watchObservedRunningTime="2026-01-03 04:33:11.107148693 +0000 UTC m=+1018.224201878" Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.120655 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" podStartSLOduration=3.523580539 podStartE2EDuration="16.120635926s" podCreationTimestamp="2026-01-03 04:32:55 +0000 UTC" firstStartedPulling="2026-01-03 04:32:56.451073328 +0000 UTC m=+1003.568126523" lastFinishedPulling="2026-01-03 04:33:09.048128725 +0000 UTC m=+1016.165181910" observedRunningTime="2026-01-03 04:33:11.116583047 +0000 UTC m=+1018.233636252" watchObservedRunningTime="2026-01-03 04:33:11.120635926 +0000 UTC m=+1018.237689111" Jan 03 04:33:11 crc kubenswrapper[4865]: I0103 04:33:11.165890 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185a9d11-e10d-46e7-827a-a2fb17f987af" path="/var/lib/kubelet/pods/185a9d11-e10d-46e7-827a-a2fb17f987af/volumes" Jan 03 04:33:11 crc 
kubenswrapper[4865]: I0103 04:33:11.166373 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edd1e0c-3ea2-4927-bf06-582f04d05ebd" path="/var/lib/kubelet/pods/1edd1e0c-3ea2-4927-bf06-582f04d05ebd/volumes" Jan 03 04:33:13 crc kubenswrapper[4865]: W0103 04:33:13.866861 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9d25819_b14f_411e_a158_b9f315cf13d6.slice/crio-aaa362d36c0ec8407904f694632997c76a39276f6d28a7ee1325b38b95c5cd7c WatchSource:0}: Error finding container aaa362d36c0ec8407904f694632997c76a39276f6d28a7ee1325b38b95c5cd7c: Status 404 returned error can't find the container with id aaa362d36c0ec8407904f694632997c76a39276f6d28a7ee1325b38b95c5cd7c Jan 03 04:33:14 crc kubenswrapper[4865]: I0103 04:33:14.112638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9d25819-b14f-411e-a158-b9f315cf13d6","Type":"ContainerStarted","Data":"aaa362d36c0ec8407904f694632997c76a39276f6d28a7ee1325b38b95c5cd7c"} Jan 03 04:33:16 crc kubenswrapper[4865]: I0103 04:33:16.038561 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:33:16 crc kubenswrapper[4865]: I0103 04:33:16.242558 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:33:16 crc kubenswrapper[4865]: I0103 04:33:16.291684 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7wqzw"] Jan 03 04:33:16 crc kubenswrapper[4865]: I0103 04:33:16.291878 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="dnsmasq-dns" containerID="cri-o://7cf990c36612fc03e043dd2914c34a1ce96765cda7b2cc71edf393ec51c6371d" gracePeriod=10 Jan 03 04:33:17 crc 
kubenswrapper[4865]: I0103 04:33:17.135247 4865 generic.go:334] "Generic (PLEG): container finished" podID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerID="7cf990c36612fc03e043dd2914c34a1ce96765cda7b2cc71edf393ec51c6371d" exitCode=0 Jan 03 04:33:17 crc kubenswrapper[4865]: I0103 04:33:17.135338 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" event={"ID":"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c","Type":"ContainerDied","Data":"7cf990c36612fc03e043dd2914c34a1ce96765cda7b2cc71edf393ec51c6371d"} Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.615744 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qv9sm"] Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.617094 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.618875 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.676826 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qv9sm"] Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.716258 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-combined-ca-bundle\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.716365 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ch59\" (UniqueName: \"kubernetes.io/projected/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-kube-api-access-6ch59\") pod \"ovn-controller-metrics-qv9sm\" (UID: 
\"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.716443 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.716475 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-config\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.716569 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-ovn-rundir\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.716632 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-ovs-rundir\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.817929 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-ovn-rundir\") pod \"ovn-controller-metrics-qv9sm\" (UID: 
\"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.818005 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-ovs-rundir\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.818028 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-combined-ca-bundle\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.818110 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ch59\" (UniqueName: \"kubernetes.io/projected/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-kube-api-access-6ch59\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.818158 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.818197 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-config\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " 
pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.818883 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-config\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.822680 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-ovn-rundir\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.822680 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-ovs-rundir\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.824995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-combined-ca-bundle\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.837105 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ch59\" (UniqueName: \"kubernetes.io/projected/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-kube-api-access-6ch59\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.848948 
4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7da9b69-55d5-43a2-8e3c-2a25ca513ce6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qv9sm\" (UID: \"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6\") " pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.878409 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vtk25"] Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.879914 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.881809 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.900185 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vtk25"] Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.919870 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-config\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.919947 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.919977 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhgnc\" 
(UniqueName: \"kubernetes.io/projected/36ed5a1e-325f-4069-b4a7-a6ff01925b57-kube-api-access-hhgnc\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.920002 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:18 crc kubenswrapper[4865]: I0103 04:33:18.981268 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qv9sm" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.021521 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.021642 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-config\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.021689 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 
04:33:19.021714 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhgnc\" (UniqueName: \"kubernetes.io/projected/36ed5a1e-325f-4069-b4a7-a6ff01925b57-kube-api-access-hhgnc\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.022582 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.022609 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-config\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.022690 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.046303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhgnc\" (UniqueName: \"kubernetes.io/projected/36ed5a1e-325f-4069-b4a7-a6ff01925b57-kube-api-access-hhgnc\") pod \"dnsmasq-dns-7f896c8c65-vtk25\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.110509 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7f896c8c65-vtk25"] Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.111122 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.128608 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nm5cw"] Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.129822 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.133055 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.145343 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nm5cw"] Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.226265 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.226375 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcnjg\" (UniqueName: \"kubernetes.io/projected/0c3a1c46-1e23-4ba2-a026-c712d9599436-kube-api-access-wcnjg\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.226431 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-config\") pod 
\"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.226469 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.226529 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.328386 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcnjg\" (UniqueName: \"kubernetes.io/projected/0c3a1c46-1e23-4ba2-a026-c712d9599436-kube-api-access-wcnjg\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.328466 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-config\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.328496 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-nb\") pod 
\"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.328555 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.328608 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.329408 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.329581 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.329950 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-config\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.330140 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.350574 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcnjg\" (UniqueName: \"kubernetes.io/projected/0c3a1c46-1e23-4ba2-a026-c712d9599436-kube-api-access-wcnjg\") pod \"dnsmasq-dns-86db49b7ff-nm5cw\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:19 crc kubenswrapper[4865]: I0103 04:33:19.444538 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:21 crc kubenswrapper[4865]: I0103 04:33:21.009238 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: connect: connection refused" Jan 03 04:33:26 crc kubenswrapper[4865]: I0103 04:33:26.010012 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: connect: connection refused" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.496892 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.497408 4865 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n689h55dh64ch67h68ch5bh7ch5b9h67bh5b8h676h59h5dfhf5h56ch55h6dh5bh5b4h68dhc5h595h544h5bch595h5dfh89h654h5fch76h66ch84q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsjgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-5hbf4_openstack(5dc49e44-6dba-457d-b535-41a724d9640f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.498733 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-5hbf4" podUID="5dc49e44-6dba-457d-b535-41a724d9640f" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.531731 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.532014 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjnrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(2eba40ea-25a4-4887-aa6c-7feb32b91491): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.533425 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="2eba40ea-25a4-4887-aa6c-7feb32b91491" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.575114 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.575337 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/
var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(20843cac-0ba6-4f8f-b767-dd61fdb4f160): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.576583 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="20843cac-0ba6-4f8f-b767-dd61fdb4f160" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.756550 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.756743 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile 
unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n689h55dh64ch67h68ch5bh7ch5b9h67bh5b8h676h59h5dfhf5h56ch55h6dh5bh5b4h68dhc5h595h544h5bch595h5dfh89h654h5fch76h66ch84q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgps5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Ex
ec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-gpwwp_openstack(334ea42d-9265-43f9-8c4c-fdf516746069): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:33:29 crc kubenswrapper[4865]: E0103 04:33:29.757855 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-gpwwp" podUID="334ea42d-9265-43f9-8c4c-fdf516746069" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.126992 4865 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.266609 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-config\") pod \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.266889 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-dns-svc\") pod \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.267011 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8s8k\" (UniqueName: \"kubernetes.io/projected/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-kube-api-access-d8s8k\") pod \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\" (UID: \"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c\") " Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.272525 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-kube-api-access-d8s8k" (OuterVolumeSpecName: "kube-api-access-d8s8k") pod "5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" (UID: "5041ca65-7cb4-42a2-aec5-fa278b0d6a7c"). InnerVolumeSpecName "kube-api-access-d8s8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.274680 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.274679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-7wqzw" event={"ID":"5041ca65-7cb4-42a2-aec5-fa278b0d6a7c","Type":"ContainerDied","Data":"886de50073786ceea30e1d167f43fd198d203290809e030fefe9a68f4abb8672"} Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.275010 4865 scope.go:117] "RemoveContainer" containerID="7cf990c36612fc03e043dd2914c34a1ce96765cda7b2cc71edf393ec51c6371d" Jan 03 04:33:30 crc kubenswrapper[4865]: E0103 04:33:30.276040 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-5hbf4" podUID="5dc49e44-6dba-457d-b535-41a724d9640f" Jan 03 04:33:30 crc kubenswrapper[4865]: E0103 04:33:30.276648 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-gpwwp" podUID="334ea42d-9265-43f9-8c4c-fdf516746069" Jan 03 04:33:30 crc kubenswrapper[4865]: E0103 04:33:30.276701 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="20843cac-0ba6-4f8f-b767-dd61fdb4f160" Jan 03 04:33:30 crc kubenswrapper[4865]: E0103 04:33:30.277600 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="2eba40ea-25a4-4887-aa6c-7feb32b91491" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.315112 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" (UID: "5041ca65-7cb4-42a2-aec5-fa278b0d6a7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.327695 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-config" (OuterVolumeSpecName: "config") pod "5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" (UID: "5041ca65-7cb4-42a2-aec5-fa278b0d6a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.368861 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.368907 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8s8k\" (UniqueName: \"kubernetes.io/projected/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-kube-api-access-d8s8k\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.369016 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:30 crc kubenswrapper[4865]: E0103 04:33:30.546499 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Jan 03 04:33:30 crc kubenswrapper[4865]: E0103 04:33:30.546785 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n689h647hd6h59fhcchd6h84h66ch88h659h548h5c4hdch9dh5bdh5b8h585h669h568h59dh699hdbhbbh5f5h695h658h65bhc9hdfhf9h74h686q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-ap
i-access-hz25j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(c9d25819-b14f-411e-a158-b9f315cf13d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.617263 4865 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7wqzw"] Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.622887 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-7wqzw"] Jan 03 04:33:30 crc kubenswrapper[4865]: I0103 04:33:30.922954 4865 scope.go:117] "RemoveContainer" containerID="f400667168c3fc3b698e40c90ed8d5007fecfda3332fd082b1f229d3c0c5d8f2" Jan 03 04:33:31 crc kubenswrapper[4865]: I0103 04:33:31.166324 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" path="/var/lib/kubelet/pods/5041ca65-7cb4-42a2-aec5-fa278b0d6a7c/volumes" Jan 03 04:33:31 crc kubenswrapper[4865]: I0103 04:33:31.192609 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vtk25"] Jan 03 04:33:31 crc kubenswrapper[4865]: I0103 04:33:31.333859 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nm5cw"] Jan 03 04:33:31 crc kubenswrapper[4865]: I0103 04:33:31.422299 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qv9sm"] Jan 03 04:33:31 crc kubenswrapper[4865]: W0103 04:33:31.605051 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36ed5a1e_325f_4069_b4a7_a6ff01925b57.slice/crio-5252da2cd787332c8df6875355a0e58b3960d979026abd6f25ee870ad8c0cb78 WatchSource:0}: Error finding container 5252da2cd787332c8df6875355a0e58b3960d979026abd6f25ee870ad8c0cb78: Status 404 returned error can't find the container with id 5252da2cd787332c8df6875355a0e58b3960d979026abd6f25ee870ad8c0cb78 Jan 03 04:33:31 crc kubenswrapper[4865]: E0103 04:33:31.613916 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 03 04:33:31 crc kubenswrapper[4865]: E0103 04:33:31.613965 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 03 04:33:31 crc kubenswrapper[4865]: E0103 04:33:31.614092 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z95rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(3c06d4fd-0e97-401a-a450-92e7e1c22131): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 04:33:31 crc kubenswrapper[4865]: E0103 04:33:31.615271 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="3c06d4fd-0e97-401a-a450-92e7e1c22131" Jan 03 04:33:31 crc kubenswrapper[4865]: W0103 04:33:31.623900 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7da9b69_55d5_43a2_8e3c_2a25ca513ce6.slice/crio-2bdd296f2dd1137f908f82a9da2ca5e516c4cf56caa846ee60f11d3bdb162a82 WatchSource:0}: Error finding container 2bdd296f2dd1137f908f82a9da2ca5e516c4cf56caa846ee60f11d3bdb162a82: Status 404 returned error can't find the container with id 
2bdd296f2dd1137f908f82a9da2ca5e516c4cf56caa846ee60f11d3bdb162a82 Jan 03 04:33:31 crc kubenswrapper[4865]: W0103 04:33:31.628815 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c3a1c46_1e23_4ba2_a026_c712d9599436.slice/crio-9fc6e7241182fc806c409912e2db0ea3c21321b06214f1213be4efaba3557eb8 WatchSource:0}: Error finding container 9fc6e7241182fc806c409912e2db0ea3c21321b06214f1213be4efaba3557eb8: Status 404 returned error can't find the container with id 9fc6e7241182fc806c409912e2db0ea3c21321b06214f1213be4efaba3557eb8 Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.291083 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qv9sm" event={"ID":"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6","Type":"ContainerStarted","Data":"2bdd296f2dd1137f908f82a9da2ca5e516c4cf56caa846ee60f11d3bdb162a82"} Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.292875 4865 generic.go:334] "Generic (PLEG): container finished" podID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerID="040dda61a21c0b8cbad4154ae28d5b2c43136205e6a3da1f50366d92f8b39ded" exitCode=0 Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.292945 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" event={"ID":"0c3a1c46-1e23-4ba2-a026-c712d9599436","Type":"ContainerDied","Data":"040dda61a21c0b8cbad4154ae28d5b2c43136205e6a3da1f50366d92f8b39ded"} Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.292974 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" event={"ID":"0c3a1c46-1e23-4ba2-a026-c712d9599436","Type":"ContainerStarted","Data":"9fc6e7241182fc806c409912e2db0ea3c21321b06214f1213be4efaba3557eb8"} Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.295995 4865 generic.go:334] "Generic (PLEG): container finished" podID="36ed5a1e-325f-4069-b4a7-a6ff01925b57" 
containerID="13a9b5d472fadf5cdea4696d4ac7728177dee6225353d8db0f29becf702be0b9" exitCode=0 Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.296065 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" event={"ID":"36ed5a1e-325f-4069-b4a7-a6ff01925b57","Type":"ContainerDied","Data":"13a9b5d472fadf5cdea4696d4ac7728177dee6225353d8db0f29becf702be0b9"} Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.296093 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" event={"ID":"36ed5a1e-325f-4069-b4a7-a6ff01925b57","Type":"ContainerStarted","Data":"5252da2cd787332c8df6875355a0e58b3960d979026abd6f25ee870ad8c0cb78"} Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.297762 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4b281b80-3b3a-4c04-a904-669d66ec4a74","Type":"ContainerStarted","Data":"6d47987015641476af46b407c53d82480cf7413470dbeacbb581638b4b7b7578"} Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.300003 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"31864768-e1b4-438d-b88d-a5a8f9e89e5e","Type":"ContainerStarted","Data":"245383bfa870055baf8f90b68a857f5f6778d5ae973af216af56e1d44f1a4a94"} Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.300114 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 03 04:33:32 crc kubenswrapper[4865]: E0103 04:33:32.301255 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="3c06d4fd-0e97-401a-a450-92e7e1c22131" Jan 03 04:33:32 crc kubenswrapper[4865]: I0103 04:33:32.338237 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/memcached-0" podStartSLOduration=12.388126549 podStartE2EDuration="32.338218084s" podCreationTimestamp="2026-01-03 04:33:00 +0000 UTC" firstStartedPulling="2026-01-03 04:33:10.084852709 +0000 UTC m=+1017.201905884" lastFinishedPulling="2026-01-03 04:33:30.034944114 +0000 UTC m=+1037.151997419" observedRunningTime="2026-01-03 04:33:32.334555254 +0000 UTC m=+1039.451608429" watchObservedRunningTime="2026-01-03 04:33:32.338218084 +0000 UTC m=+1039.455271269" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.309456 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3d1e308-7d01-4224-9cc0-a5ed59256c80","Type":"ContainerStarted","Data":"0f49289493175c6b66587e9f00a161681c04ea251dabfe0a77f61cbf5a9a8e38"} Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.312971 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41","Type":"ContainerStarted","Data":"58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c"} Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.646884 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.743837 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-dns-svc\") pod \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.743893 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-ovsdbserver-sb\") pod \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.744030 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-config\") pod \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.744102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhgnc\" (UniqueName: \"kubernetes.io/projected/36ed5a1e-325f-4069-b4a7-a6ff01925b57-kube-api-access-hhgnc\") pod \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\" (UID: \"36ed5a1e-325f-4069-b4a7-a6ff01925b57\") " Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.750280 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ed5a1e-325f-4069-b4a7-a6ff01925b57-kube-api-access-hhgnc" (OuterVolumeSpecName: "kube-api-access-hhgnc") pod "36ed5a1e-325f-4069-b4a7-a6ff01925b57" (UID: "36ed5a1e-325f-4069-b4a7-a6ff01925b57"). InnerVolumeSpecName "kube-api-access-hhgnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.763817 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36ed5a1e-325f-4069-b4a7-a6ff01925b57" (UID: "36ed5a1e-325f-4069-b4a7-a6ff01925b57"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.766055 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-config" (OuterVolumeSpecName: "config") pod "36ed5a1e-325f-4069-b4a7-a6ff01925b57" (UID: "36ed5a1e-325f-4069-b4a7-a6ff01925b57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.779931 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36ed5a1e-325f-4069-b4a7-a6ff01925b57" (UID: "36ed5a1e-325f-4069-b4a7-a6ff01925b57"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.849262 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.849301 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhgnc\" (UniqueName: \"kubernetes.io/projected/36ed5a1e-325f-4069-b4a7-a6ff01925b57-kube-api-access-hhgnc\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.849312 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:33 crc kubenswrapper[4865]: I0103 04:33:33.849320 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36ed5a1e-325f-4069-b4a7-a6ff01925b57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.320269 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" event={"ID":"36ed5a1e-325f-4069-b4a7-a6ff01925b57","Type":"ContainerDied","Data":"5252da2cd787332c8df6875355a0e58b3960d979026abd6f25ee870ad8c0cb78"} Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.320814 4865 scope.go:117] "RemoveContainer" containerID="13a9b5d472fadf5cdea4696d4ac7728177dee6225353d8db0f29becf702be0b9" Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.320343 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-vtk25" Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.323591 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" event={"ID":"0c3a1c46-1e23-4ba2-a026-c712d9599436","Type":"ContainerStarted","Data":"a7ce102b345cbc877f666f0413b0ec7ec0d79f1cc0101199d7d9b6c1d3a6b927"} Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.353178 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" podStartSLOduration=15.353157428 podStartE2EDuration="15.353157428s" podCreationTimestamp="2026-01-03 04:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:33:34.3465814 +0000 UTC m=+1041.463634605" watchObservedRunningTime="2026-01-03 04:33:34.353157428 +0000 UTC m=+1041.470210623" Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.393077 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vtk25"] Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.400019 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-vtk25"] Jan 03 04:33:34 crc kubenswrapper[4865]: E0103 04:33:34.426064 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="c9d25819-b14f-411e-a158-b9f315cf13d6" Jan 03 04:33:34 crc kubenswrapper[4865]: I0103 04:33:34.444728 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:35 crc kubenswrapper[4865]: I0103 04:33:35.172746 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ed5a1e-325f-4069-b4a7-a6ff01925b57" 
path="/var/lib/kubelet/pods/36ed5a1e-325f-4069-b4a7-a6ff01925b57/volumes" Jan 03 04:33:35 crc kubenswrapper[4865]: I0103 04:33:35.335426 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4b281b80-3b3a-4c04-a904-669d66ec4a74","Type":"ContainerStarted","Data":"213c7466d8812a9b4bd4c8ba6029672a5af6767d3ca0073d29fd4c7700ee7010"} Jan 03 04:33:35 crc kubenswrapper[4865]: I0103 04:33:35.337687 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qv9sm" event={"ID":"c7da9b69-55d5-43a2-8e3c-2a25ca513ce6","Type":"ContainerStarted","Data":"2c331aeec84b2792cf2025426813ece67ae6f8f6d43d37c9743b2e2fb06bf16e"} Jan 03 04:33:35 crc kubenswrapper[4865]: I0103 04:33:35.339803 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9d25819-b14f-411e-a158-b9f315cf13d6","Type":"ContainerStarted","Data":"0e850b35de366959fc4457805d40fad4106db7e7d605ee309e9a03596b87239b"} Jan 03 04:33:35 crc kubenswrapper[4865]: E0103 04:33:35.341844 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="c9d25819-b14f-411e-a158-b9f315cf13d6" Jan 03 04:33:35 crc kubenswrapper[4865]: I0103 04:33:35.362779 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.61384202 podStartE2EDuration="30.362757578s" podCreationTimestamp="2026-01-03 04:33:05 +0000 UTC" firstStartedPulling="2026-01-03 04:33:10.289230737 +0000 UTC m=+1017.406283922" lastFinishedPulling="2026-01-03 04:33:34.038146295 +0000 UTC m=+1041.155199480" observedRunningTime="2026-01-03 04:33:35.358196414 +0000 UTC m=+1042.475249639" watchObservedRunningTime="2026-01-03 04:33:35.362757578 +0000 UTC 
m=+1042.479810803" Jan 03 04:33:35 crc kubenswrapper[4865]: I0103 04:33:35.405949 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qv9sm" podStartSLOduration=15.018570475 podStartE2EDuration="17.405926158s" podCreationTimestamp="2026-01-03 04:33:18 +0000 UTC" firstStartedPulling="2026-01-03 04:33:31.627005747 +0000 UTC m=+1038.744058932" lastFinishedPulling="2026-01-03 04:33:34.01436143 +0000 UTC m=+1041.131414615" observedRunningTime="2026-01-03 04:33:35.401624071 +0000 UTC m=+1042.518677266" watchObservedRunningTime="2026-01-03 04:33:35.405926158 +0000 UTC m=+1042.522979353" Jan 03 04:33:36 crc kubenswrapper[4865]: E0103 04:33:36.355816 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="c9d25819-b14f-411e-a158-b9f315cf13d6" Jan 03 04:33:36 crc kubenswrapper[4865]: I0103 04:33:36.934253 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:36 crc kubenswrapper[4865]: I0103 04:33:36.934315 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:37 crc kubenswrapper[4865]: I0103 04:33:37.008981 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:37 crc kubenswrapper[4865]: I0103 04:33:37.471367 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 03 04:33:39 crc kubenswrapper[4865]: I0103 04:33:39.447070 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:39 crc kubenswrapper[4865]: I0103 04:33:39.517136 4865 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-29qbb"] Jan 03 04:33:39 crc kubenswrapper[4865]: I0103 04:33:39.517724 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" podUID="6f490453-952c-44f5-b3d1-0b7145123630" containerName="dnsmasq-dns" containerID="cri-o://da4a2dbe946df3922e014729b9aefcc572e1961cd52437f2674cc832140daac6" gracePeriod=10 Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.388190 4865 generic.go:334] "Generic (PLEG): container finished" podID="6f490453-952c-44f5-b3d1-0b7145123630" containerID="da4a2dbe946df3922e014729b9aefcc572e1961cd52437f2674cc832140daac6" exitCode=0 Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.388301 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" event={"ID":"6f490453-952c-44f5-b3d1-0b7145123630","Type":"ContainerDied","Data":"da4a2dbe946df3922e014729b9aefcc572e1961cd52437f2674cc832140daac6"} Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.536324 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.740159 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.740219 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.740272 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.740966 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"443ad2de9df44972affb457676a84e83fbdcce8153e921cc5ed8476d8a4f6591"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.741034 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://443ad2de9df44972affb457676a84e83fbdcce8153e921cc5ed8476d8a4f6591" gracePeriod=600 Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.848199 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.976223 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-dns-svc\") pod \"6f490453-952c-44f5-b3d1-0b7145123630\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.976571 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfjws\" (UniqueName: \"kubernetes.io/projected/6f490453-952c-44f5-b3d1-0b7145123630-kube-api-access-tfjws\") pod \"6f490453-952c-44f5-b3d1-0b7145123630\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.976677 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-config\") pod \"6f490453-952c-44f5-b3d1-0b7145123630\" (UID: \"6f490453-952c-44f5-b3d1-0b7145123630\") " Jan 03 04:33:40 crc kubenswrapper[4865]: I0103 04:33:40.981318 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f490453-952c-44f5-b3d1-0b7145123630-kube-api-access-tfjws" (OuterVolumeSpecName: "kube-api-access-tfjws") pod "6f490453-952c-44f5-b3d1-0b7145123630" (UID: "6f490453-952c-44f5-b3d1-0b7145123630"). InnerVolumeSpecName "kube-api-access-tfjws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.007497 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-config" (OuterVolumeSpecName: "config") pod "6f490453-952c-44f5-b3d1-0b7145123630" (UID: "6f490453-952c-44f5-b3d1-0b7145123630"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.013235 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f490453-952c-44f5-b3d1-0b7145123630" (UID: "6f490453-952c-44f5-b3d1-0b7145123630"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.078556 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.078589 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfjws\" (UniqueName: \"kubernetes.io/projected/6f490453-952c-44f5-b3d1-0b7145123630-kube-api-access-tfjws\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.078600 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f490453-952c-44f5-b3d1-0b7145123630-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.397589 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="443ad2de9df44972affb457676a84e83fbdcce8153e921cc5ed8476d8a4f6591" exitCode=0 Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.397671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"443ad2de9df44972affb457676a84e83fbdcce8153e921cc5ed8476d8a4f6591"} Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.397980 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"a82e68c06b39e809cdca2872b7c7b72d7a687416c8815b2c0f9636f63f6ab156"} Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.398002 4865 scope.go:117] "RemoveContainer" containerID="c20a0e7659d3d063fdf492b3db209de2d28bdf1740f3632846fd9860f5536eb8" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.401735 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" event={"ID":"6f490453-952c-44f5-b3d1-0b7145123630","Type":"ContainerDied","Data":"a05a244fe1275070378d4864b989de476dc0e600b88d0cae29bba69a614c0f4f"} Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.401767 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-29qbb" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.421900 4865 scope.go:117] "RemoveContainer" containerID="da4a2dbe946df3922e014729b9aefcc572e1961cd52437f2674cc832140daac6" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.439483 4865 scope.go:117] "RemoveContainer" containerID="afb9eb00f849d9a9d07db69bd94da92b987e385b60dddd5fd464f2dd18df92da" Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.446040 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-29qbb"] Jan 03 04:33:41 crc kubenswrapper[4865]: I0103 04:33:41.451370 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-29qbb"] Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.414735 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gpwwp" event={"ID":"334ea42d-9265-43f9-8c4c-fdf516746069","Type":"ContainerStarted","Data":"4c503ee9c0ecb1cf85fcbb60a84031d736a87851797bb173ad63250a05c56d6a"} Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.415072 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-gpwwp" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.444129 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gpwwp" podStartSLOduration=5.939130371 podStartE2EDuration="37.444102831s" podCreationTimestamp="2026-01-03 04:33:05 +0000 UTC" firstStartedPulling="2026-01-03 04:33:10.084582372 +0000 UTC m=+1017.201635557" lastFinishedPulling="2026-01-03 04:33:41.589554832 +0000 UTC m=+1048.706608017" observedRunningTime="2026-01-03 04:33:42.443758302 +0000 UTC m=+1049.560811507" watchObservedRunningTime="2026-01-03 04:33:42.444102831 +0000 UTC m=+1049.561156046" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.950616 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-mfjm9"] Jan 03 04:33:42 crc kubenswrapper[4865]: E0103 04:33:42.951253 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="init" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951265 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="init" Jan 03 04:33:42 crc kubenswrapper[4865]: E0103 04:33:42.951279 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ed5a1e-325f-4069-b4a7-a6ff01925b57" containerName="init" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951285 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ed5a1e-325f-4069-b4a7-a6ff01925b57" containerName="init" Jan 03 04:33:42 crc kubenswrapper[4865]: E0103 04:33:42.951293 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f490453-952c-44f5-b3d1-0b7145123630" containerName="dnsmasq-dns" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951299 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f490453-952c-44f5-b3d1-0b7145123630" containerName="dnsmasq-dns" Jan 03 04:33:42 
crc kubenswrapper[4865]: E0103 04:33:42.951315 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="dnsmasq-dns" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951320 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="dnsmasq-dns" Jan 03 04:33:42 crc kubenswrapper[4865]: E0103 04:33:42.951329 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f490453-952c-44f5-b3d1-0b7145123630" containerName="init" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951335 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f490453-952c-44f5-b3d1-0b7145123630" containerName="init" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951510 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ed5a1e-325f-4069-b4a7-a6ff01925b57" containerName="init" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951524 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f490453-952c-44f5-b3d1-0b7145123630" containerName="dnsmasq-dns" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.951533 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5041ca65-7cb4-42a2-aec5-fa278b0d6a7c" containerName="dnsmasq-dns" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.952278 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:42 crc kubenswrapper[4865]: I0103 04:33:42.958940 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mfjm9"] Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.117223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7b7\" (UniqueName: \"kubernetes.io/projected/50d764b4-196f-468f-b4e4-8a3f9f6f206c-kube-api-access-rg7b7\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.117517 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-config\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.117624 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-dns-svc\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.117756 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.117852 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.188272 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f490453-952c-44f5-b3d1-0b7145123630" path="/var/lib/kubelet/pods/6f490453-952c-44f5-b3d1-0b7145123630/volumes" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.219575 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-dns-svc\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.219656 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.219692 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.219710 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7b7\" (UniqueName: \"kubernetes.io/projected/50d764b4-196f-468f-b4e4-8a3f9f6f206c-kube-api-access-rg7b7\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: 
\"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.219757 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-config\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.220633 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-config\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.221180 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.221316 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-dns-svc\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.221719 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: 
I0103 04:33:43.253668 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7b7\" (UniqueName: \"kubernetes.io/projected/50d764b4-196f-468f-b4e4-8a3f9f6f206c-kube-api-access-rg7b7\") pod \"dnsmasq-dns-698758b865-mfjm9\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.299608 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:43 crc kubenswrapper[4865]: W0103 04:33:43.787322 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50d764b4_196f_468f_b4e4_8a3f9f6f206c.slice/crio-7b3f74294e77f3bf3abf1131955f529fdb3351cd414f8efc83730747ee857e04 WatchSource:0}: Error finding container 7b3f74294e77f3bf3abf1131955f529fdb3351cd414f8efc83730747ee857e04: Status 404 returned error can't find the container with id 7b3f74294e77f3bf3abf1131955f529fdb3351cd414f8efc83730747ee857e04 Jan 03 04:33:43 crc kubenswrapper[4865]: I0103 04:33:43.798707 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mfjm9"] Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.022133 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.028223 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.032251 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.032279 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.032264 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.032613 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8cxhr" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.044675 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.134436 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-cache\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.134549 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.134585 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc 
kubenswrapper[4865]: I0103 04:33:44.134664 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt47g\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-kube-api-access-bt47g\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.134695 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-lock\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.235697 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.235758 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.235835 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt47g\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-kube-api-access-bt47g\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.235882 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-lock\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.235931 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-cache\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.236509 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-cache\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.238660 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: E0103 04:33:44.239718 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 04:33:44 crc kubenswrapper[4865]: E0103 04:33:44.239746 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 04:33:44 crc kubenswrapper[4865]: E0103 04:33:44.239798 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift podName:870f799a-79a7-40ff-9a9c-ecb096c9bfcb nodeName:}" failed. 
No retries permitted until 2026-01-03 04:33:44.739778896 +0000 UTC m=+1051.856832081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift") pod "swift-storage-0" (UID: "870f799a-79a7-40ff-9a9c-ecb096c9bfcb") : configmap "swift-ring-files" not found Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.239965 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-lock\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.279920 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt47g\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-kube-api-access-bt47g\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.284621 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.322507 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-x464g"] Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.323516 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.326698 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.326949 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.327078 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.379922 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x464g"] Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.446566 4865 generic.go:334] "Generic (PLEG): container finished" podID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerID="c96e93aaf2a4c21e5bf43c358e0ce7801f77c852d3edc7050da4c0ab7e291b76" exitCode=0 Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.446606 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mfjm9" event={"ID":"50d764b4-196f-468f-b4e4-8a3f9f6f206c","Type":"ContainerDied","Data":"c96e93aaf2a4c21e5bf43c358e0ce7801f77c852d3edc7050da4c0ab7e291b76"} Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.446629 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mfjm9" event={"ID":"50d764b4-196f-468f-b4e4-8a3f9f6f206c","Type":"ContainerStarted","Data":"7b3f74294e77f3bf3abf1131955f529fdb3351cd414f8efc83730747ee857e04"} Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.447446 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6baae859-c56d-42e9-a3da-1e883afc3047-etc-swift\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " 
pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.447521 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptg8l\" (UniqueName: \"kubernetes.io/projected/6baae859-c56d-42e9-a3da-1e883afc3047-kube-api-access-ptg8l\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.447582 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-dispersionconf\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.447699 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-combined-ca-bundle\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.447755 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-ring-data-devices\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.447783 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-scripts\") pod \"swift-ring-rebalance-x464g\" (UID: 
\"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.447853 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-swiftconf\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.549679 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-combined-ca-bundle\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.549735 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-ring-data-devices\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.549771 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-scripts\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.549822 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-swiftconf\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " 
pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.549858 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6baae859-c56d-42e9-a3da-1e883afc3047-etc-swift\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.549912 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptg8l\" (UniqueName: \"kubernetes.io/projected/6baae859-c56d-42e9-a3da-1e883afc3047-kube-api-access-ptg8l\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.549938 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-dispersionconf\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.551574 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-ring-data-devices\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.551990 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-scripts\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 
04:33:44.554460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6baae859-c56d-42e9-a3da-1e883afc3047-etc-swift\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.556250 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-dispersionconf\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.559185 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-combined-ca-bundle\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.569093 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-swiftconf\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.574641 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptg8l\" (UniqueName: \"kubernetes.io/projected/6baae859-c56d-42e9-a3da-1e883afc3047-kube-api-access-ptg8l\") pod \"swift-ring-rebalance-x464g\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.753119 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:44 crc kubenswrapper[4865]: E0103 04:33:44.753322 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 04:33:44 crc kubenswrapper[4865]: E0103 04:33:44.753342 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 04:33:44 crc kubenswrapper[4865]: E0103 04:33:44.753403 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift podName:870f799a-79a7-40ff-9a9c-ecb096c9bfcb nodeName:}" failed. No retries permitted until 2026-01-03 04:33:45.75336803 +0000 UTC m=+1052.870421215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift") pod "swift-storage-0" (UID: "870f799a-79a7-40ff-9a9c-ecb096c9bfcb") : configmap "swift-ring-files" not found Jan 03 04:33:44 crc kubenswrapper[4865]: I0103 04:33:44.818784 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.247061 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x464g"] Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.456080 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2eba40ea-25a4-4887-aa6c-7feb32b91491","Type":"ContainerStarted","Data":"55260c6d0dbe1a00b04a065cfc376f5a3052d29dad5bc9245bce817bb8938b53"} Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.458048 4865 generic.go:334] "Generic (PLEG): container finished" podID="5dc49e44-6dba-457d-b535-41a724d9640f" containerID="b4126fba24f5dc7a1b8e2c9d677a441b9acd465aefd8a1758a1594235b9ed3ec" exitCode=0 Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.458088 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5hbf4" event={"ID":"5dc49e44-6dba-457d-b535-41a724d9640f","Type":"ContainerDied","Data":"b4126fba24f5dc7a1b8e2c9d677a441b9acd465aefd8a1758a1594235b9ed3ec"} Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.460083 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mfjm9" event={"ID":"50d764b4-196f-468f-b4e4-8a3f9f6f206c","Type":"ContainerStarted","Data":"bcdec8602bbecd70c089f837370a7125b450ccf486e97186c06ca218c863619d"} Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.460310 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.461629 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"20843cac-0ba6-4f8f-b767-dd61fdb4f160","Type":"ContainerStarted","Data":"90d7ef6f70c70e2dbe52577e2137344c2026c21759c43629947ef78ae288c597"} Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.463643 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-ring-rebalance-x464g" event={"ID":"6baae859-c56d-42e9-a3da-1e883afc3047","Type":"ContainerStarted","Data":"00e9ba7d003180ad5c519322904ccf69db2ed031731ffc35105c09eb37888922"} Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.523941 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-mfjm9" podStartSLOduration=3.523917043 podStartE2EDuration="3.523917043s" podCreationTimestamp="2026-01-03 04:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:33:45.515636089 +0000 UTC m=+1052.632689324" watchObservedRunningTime="2026-01-03 04:33:45.523917043 +0000 UTC m=+1052.640970268" Jan 03 04:33:45 crc kubenswrapper[4865]: I0103 04:33:45.778562 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:45 crc kubenswrapper[4865]: E0103 04:33:45.778831 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 04:33:45 crc kubenswrapper[4865]: E0103 04:33:45.779090 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 04:33:45 crc kubenswrapper[4865]: E0103 04:33:45.779186 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift podName:870f799a-79a7-40ff-9a9c-ecb096c9bfcb nodeName:}" failed. No retries permitted until 2026-01-03 04:33:47.779165438 +0000 UTC m=+1054.896218643 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift") pod "swift-storage-0" (UID: "870f799a-79a7-40ff-9a9c-ecb096c9bfcb") : configmap "swift-ring-files" not found Jan 03 04:33:46 crc kubenswrapper[4865]: I0103 04:33:46.473631 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5hbf4" event={"ID":"5dc49e44-6dba-457d-b535-41a724d9640f","Type":"ContainerStarted","Data":"e22e24ff4ede3e1631c10bad588ddfa90c361123d8a9f476605dd0f5c19576ab"} Jan 03 04:33:46 crc kubenswrapper[4865]: I0103 04:33:46.473958 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5hbf4" event={"ID":"5dc49e44-6dba-457d-b535-41a724d9640f","Type":"ContainerStarted","Data":"a001ad8c00956d2ca4f5b04eba7a3b68aeb37f6a66948fc0b989d93c51b02657"} Jan 03 04:33:46 crc kubenswrapper[4865]: I0103 04:33:46.474110 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:46 crc kubenswrapper[4865]: I0103 04:33:46.504559 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5hbf4" podStartSLOduration=7.394023769 podStartE2EDuration="41.504538318s" podCreationTimestamp="2026-01-03 04:33:05 +0000 UTC" firstStartedPulling="2026-01-03 04:33:10.216714846 +0000 UTC m=+1017.333768031" lastFinishedPulling="2026-01-03 04:33:44.327229395 +0000 UTC m=+1051.444282580" observedRunningTime="2026-01-03 04:33:46.497625431 +0000 UTC m=+1053.614678636" watchObservedRunningTime="2026-01-03 04:33:46.504538318 +0000 UTC m=+1053.621591503" Jan 03 04:33:47 crc kubenswrapper[4865]: I0103 04:33:47.481764 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5hbf4" Jan 03 04:33:47 crc kubenswrapper[4865]: I0103 04:33:47.811674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:47 crc kubenswrapper[4865]: E0103 04:33:47.811842 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 04:33:47 crc kubenswrapper[4865]: E0103 04:33:47.811855 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 04:33:47 crc kubenswrapper[4865]: E0103 04:33:47.811893 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift podName:870f799a-79a7-40ff-9a9c-ecb096c9bfcb nodeName:}" failed. No retries permitted until 2026-01-03 04:33:51.811880814 +0000 UTC m=+1058.928933999 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift") pod "swift-storage-0" (UID: "870f799a-79a7-40ff-9a9c-ecb096c9bfcb") : configmap "swift-ring-files" not found Jan 03 04:33:48 crc kubenswrapper[4865]: I0103 04:33:48.491417 4865 generic.go:334] "Generic (PLEG): container finished" podID="2eba40ea-25a4-4887-aa6c-7feb32b91491" containerID="55260c6d0dbe1a00b04a065cfc376f5a3052d29dad5bc9245bce817bb8938b53" exitCode=0 Jan 03 04:33:48 crc kubenswrapper[4865]: I0103 04:33:48.491558 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2eba40ea-25a4-4887-aa6c-7feb32b91491","Type":"ContainerDied","Data":"55260c6d0dbe1a00b04a065cfc376f5a3052d29dad5bc9245bce817bb8938b53"} Jan 03 04:33:50 crc kubenswrapper[4865]: I0103 04:33:50.517163 4865 generic.go:334] "Generic (PLEG): container finished" podID="20843cac-0ba6-4f8f-b767-dd61fdb4f160" 
containerID="90d7ef6f70c70e2dbe52577e2137344c2026c21759c43629947ef78ae288c597" exitCode=0 Jan 03 04:33:50 crc kubenswrapper[4865]: I0103 04:33:50.517221 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"20843cac-0ba6-4f8f-b767-dd61fdb4f160","Type":"ContainerDied","Data":"90d7ef6f70c70e2dbe52577e2137344c2026c21759c43629947ef78ae288c597"} Jan 03 04:33:51 crc kubenswrapper[4865]: I0103 04:33:51.884730 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:51 crc kubenswrapper[4865]: E0103 04:33:51.885050 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 03 04:33:51 crc kubenswrapper[4865]: E0103 04:33:51.885613 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 03 04:33:51 crc kubenswrapper[4865]: E0103 04:33:51.885720 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift podName:870f799a-79a7-40ff-9a9c-ecb096c9bfcb nodeName:}" failed. No retries permitted until 2026-01-03 04:33:59.885680074 +0000 UTC m=+1067.002733259 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift") pod "swift-storage-0" (UID: "870f799a-79a7-40ff-9a9c-ecb096c9bfcb") : configmap "swift-ring-files" not found Jan 03 04:33:52 crc kubenswrapper[4865]: I0103 04:33:52.547735 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"20843cac-0ba6-4f8f-b767-dd61fdb4f160","Type":"ContainerStarted","Data":"6f6f555da2e0a0a6fd9e440c8792cc0a7e117e6d573d3849d84cc61122ba5c07"} Jan 03 04:33:52 crc kubenswrapper[4865]: I0103 04:33:52.560823 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2eba40ea-25a4-4887-aa6c-7feb32b91491","Type":"ContainerStarted","Data":"ce98e9ad25a9b9ac71912ce813a27b2a7f4681685563a02ed5adb60a23f45efe"} Jan 03 04:33:52 crc kubenswrapper[4865]: I0103 04:33:52.575095 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371982.279701 podStartE2EDuration="54.575075249s" podCreationTimestamp="2026-01-03 04:32:58 +0000 UTC" firstStartedPulling="2026-01-03 04:33:10.134641638 +0000 UTC m=+1017.251694823" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:33:52.57473313 +0000 UTC m=+1059.691786315" watchObservedRunningTime="2026-01-03 04:33:52.575075249 +0000 UTC m=+1059.692128434" Jan 03 04:33:52 crc kubenswrapper[4865]: I0103 04:33:52.598174 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.919503915 podStartE2EDuration="55.598148704s" podCreationTimestamp="2026-01-03 04:32:57 +0000 UTC" firstStartedPulling="2026-01-03 04:33:09.672952757 +0000 UTC m=+1016.790005942" lastFinishedPulling="2026-01-03 04:33:44.351597556 +0000 UTC m=+1051.468650731" observedRunningTime="2026-01-03 04:33:52.592510771 +0000 UTC 
m=+1059.709563956" watchObservedRunningTime="2026-01-03 04:33:52.598148704 +0000 UTC m=+1059.715201889" Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.301639 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.384577 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nm5cw"] Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.384927 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" podUID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerName="dnsmasq-dns" containerID="cri-o://a7ce102b345cbc877f666f0413b0ec7ec0d79f1cc0101199d7d9b6c1d3a6b927" gracePeriod=10 Jan 03 04:33:53 crc kubenswrapper[4865]: E0103 04:33:53.555142 4865 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:35758->38.102.83.196:38213: write tcp 38.102.83.196:35758->38.102.83.196:38213: write: broken pipe Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.571065 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x464g" event={"ID":"6baae859-c56d-42e9-a3da-1e883afc3047","Type":"ContainerStarted","Data":"d83185e2adfc2d1ca98f6bdc1e735ba4e4ea5beeedb065a7ca179482e83df3bb"} Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.575885 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9d25819-b14f-411e-a158-b9f315cf13d6","Type":"ContainerStarted","Data":"35fce771927d8f1110ba4d36cbd48a1c254a90958a406d9ec02a6a5d03d42995"} Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.584551 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c06d4fd-0e97-401a-a450-92e7e1c22131","Type":"ContainerStarted","Data":"4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398"} Jan 03 
04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.584802 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.587181 4865 generic.go:334] "Generic (PLEG): container finished" podID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerID="a7ce102b345cbc877f666f0413b0ec7ec0d79f1cc0101199d7d9b6c1d3a6b927" exitCode=0 Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.587222 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" event={"ID":"0c3a1c46-1e23-4ba2-a026-c712d9599436","Type":"ContainerDied","Data":"a7ce102b345cbc877f666f0413b0ec7ec0d79f1cc0101199d7d9b6c1d3a6b927"} Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.594018 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-x464g" podStartSLOduration=2.706017777 podStartE2EDuration="9.594001342s" podCreationTimestamp="2026-01-03 04:33:44 +0000 UTC" firstStartedPulling="2026-01-03 04:33:45.262263775 +0000 UTC m=+1052.379316960" lastFinishedPulling="2026-01-03 04:33:52.15024732 +0000 UTC m=+1059.267300525" observedRunningTime="2026-01-03 04:33:53.587913526 +0000 UTC m=+1060.704966711" watchObservedRunningTime="2026-01-03 04:33:53.594001342 +0000 UTC m=+1060.711054547" Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.619473 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.60244811 podStartE2EDuration="51.619454221s" podCreationTimestamp="2026-01-03 04:33:02 +0000 UTC" firstStartedPulling="2026-01-03 04:33:10.134665448 +0000 UTC m=+1017.251718643" lastFinishedPulling="2026-01-03 04:33:52.151671569 +0000 UTC m=+1059.268724754" observedRunningTime="2026-01-03 04:33:53.611181637 +0000 UTC m=+1060.728234822" watchObservedRunningTime="2026-01-03 04:33:53.619454221 +0000 UTC m=+1060.736507396" Jan 03 04:33:53 crc 
kubenswrapper[4865]: I0103 04:33:53.640091 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.3596463100000005 podStartE2EDuration="45.64007619s" podCreationTimestamp="2026-01-03 04:33:08 +0000 UTC" firstStartedPulling="2026-01-03 04:33:13.870193561 +0000 UTC m=+1020.987246746" lastFinishedPulling="2026-01-03 04:33:52.150623431 +0000 UTC m=+1059.267676626" observedRunningTime="2026-01-03 04:33:53.635895777 +0000 UTC m=+1060.752948962" watchObservedRunningTime="2026-01-03 04:33:53.64007619 +0000 UTC m=+1060.757129375" Jan 03 04:33:53 crc kubenswrapper[4865]: I0103 04:33:53.900152 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.036801 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-sb\") pod \"0c3a1c46-1e23-4ba2-a026-c712d9599436\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.036905 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-nb\") pod \"0c3a1c46-1e23-4ba2-a026-c712d9599436\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.036993 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcnjg\" (UniqueName: \"kubernetes.io/projected/0c3a1c46-1e23-4ba2-a026-c712d9599436-kube-api-access-wcnjg\") pod \"0c3a1c46-1e23-4ba2-a026-c712d9599436\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.037024 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-config\") pod \"0c3a1c46-1e23-4ba2-a026-c712d9599436\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.037092 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-dns-svc\") pod \"0c3a1c46-1e23-4ba2-a026-c712d9599436\" (UID: \"0c3a1c46-1e23-4ba2-a026-c712d9599436\") " Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.042528 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3a1c46-1e23-4ba2-a026-c712d9599436-kube-api-access-wcnjg" (OuterVolumeSpecName: "kube-api-access-wcnjg") pod "0c3a1c46-1e23-4ba2-a026-c712d9599436" (UID: "0c3a1c46-1e23-4ba2-a026-c712d9599436"). InnerVolumeSpecName "kube-api-access-wcnjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.070276 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c3a1c46-1e23-4ba2-a026-c712d9599436" (UID: "0c3a1c46-1e23-4ba2-a026-c712d9599436"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.081364 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c3a1c46-1e23-4ba2-a026-c712d9599436" (UID: "0c3a1c46-1e23-4ba2-a026-c712d9599436"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.083961 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c3a1c46-1e23-4ba2-a026-c712d9599436" (UID: "0c3a1c46-1e23-4ba2-a026-c712d9599436"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.092223 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-config" (OuterVolumeSpecName: "config") pod "0c3a1c46-1e23-4ba2-a026-c712d9599436" (UID: "0c3a1c46-1e23-4ba2-a026-c712d9599436"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.139178 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.139208 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcnjg\" (UniqueName: \"kubernetes.io/projected/0c3a1c46-1e23-4ba2-a026-c712d9599436-kube-api-access-wcnjg\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.139221 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.139230 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 
04:33:54.139251 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3a1c46-1e23-4ba2-a026-c712d9599436-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.597731 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.597789 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nm5cw" event={"ID":"0c3a1c46-1e23-4ba2-a026-c712d9599436","Type":"ContainerDied","Data":"9fc6e7241182fc806c409912e2db0ea3c21321b06214f1213be4efaba3557eb8"} Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.597832 4865 scope.go:117] "RemoveContainer" containerID="a7ce102b345cbc877f666f0413b0ec7ec0d79f1cc0101199d7d9b6c1d3a6b927" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.614028 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.615244 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.635899 4865 scope.go:117] "RemoveContainer" containerID="040dda61a21c0b8cbad4154ae28d5b2c43136205e6a3da1f50366d92f8b39ded" Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.651079 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nm5cw"] Jan 03 04:33:54 crc kubenswrapper[4865]: I0103 04:33:54.660799 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nm5cw"] Jan 03 04:33:55 crc kubenswrapper[4865]: I0103 04:33:55.181618 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c3a1c46-1e23-4ba2-a026-c712d9599436" path="/var/lib/kubelet/pods/0c3a1c46-1e23-4ba2-a026-c712d9599436/volumes" Jan 
03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.651555 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.695730 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.946587 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 03 04:33:57 crc kubenswrapper[4865]: E0103 04:33:57.946871 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerName="init" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.946890 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerName="init" Jan 03 04:33:57 crc kubenswrapper[4865]: E0103 04:33:57.946934 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerName="dnsmasq-dns" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.946941 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerName="dnsmasq-dns" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.947110 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c3a1c46-1e23-4ba2-a026-c712d9599436" containerName="dnsmasq-dns" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.947943 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.951735 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.951818 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.951887 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.956682 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-v5qt5" Jan 03 04:33:57 crc kubenswrapper[4865]: I0103 04:33:57.958084 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.136263 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5bec493c-4a1f-49db-b9f3-d05bffd3541b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.136346 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bec493c-4a1f-49db-b9f3-d05bffd3541b-config\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.136414 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 
04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.136511 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.136657 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sk6q\" (UniqueName: \"kubernetes.io/projected/5bec493c-4a1f-49db-b9f3-d05bffd3541b-kube-api-access-4sk6q\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.136773 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.136912 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bec493c-4a1f-49db-b9f3-d05bffd3541b-scripts\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238070 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5bec493c-4a1f-49db-b9f3-d05bffd3541b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238134 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bec493c-4a1f-49db-b9f3-d05bffd3541b-config\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238163 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238236 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sk6q\" (UniqueName: \"kubernetes.io/projected/5bec493c-4a1f-49db-b9f3-d05bffd3541b-kube-api-access-4sk6q\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238280 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238334 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bec493c-4a1f-49db-b9f3-d05bffd3541b-scripts\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") 
" pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.238639 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5bec493c-4a1f-49db-b9f3-d05bffd3541b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.239203 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bec493c-4a1f-49db-b9f3-d05bffd3541b-config\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.239280 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bec493c-4a1f-49db-b9f3-d05bffd3541b-scripts\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.245195 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.248545 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.250018 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5bec493c-4a1f-49db-b9f3-d05bffd3541b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.261922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sk6q\" (UniqueName: \"kubernetes.io/projected/5bec493c-4a1f-49db-b9f3-d05bffd3541b-kube-api-access-4sk6q\") pod \"ovn-northd-0\" (UID: \"5bec493c-4a1f-49db-b9f3-d05bffd3541b\") " pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.273621 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.726062 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.781071 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.781142 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 03 04:33:58 crc kubenswrapper[4865]: I0103 04:33:58.989933 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 03 04:33:59 crc kubenswrapper[4865]: I0103 04:33:59.679086 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5bec493c-4a1f-49db-b9f3-d05bffd3541b","Type":"ContainerStarted","Data":"4b32fb08b195bbc8ac5f0e0e5892dd306cf73a93a19c63cff21c44bf7ae4ecc4"} Jan 03 04:33:59 crc kubenswrapper[4865]: I0103 04:33:59.745578 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 03 04:33:59 crc kubenswrapper[4865]: I0103 04:33:59.973754 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:33:59 crc kubenswrapper[4865]: I0103 04:33:59.995694 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/870f799a-79a7-40ff-9a9c-ecb096c9bfcb-etc-swift\") pod \"swift-storage-0\" (UID: \"870f799a-79a7-40ff-9a9c-ecb096c9bfcb\") " pod="openstack/swift-storage-0" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.147986 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.148374 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.227891 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cd23-account-create-update-xkg9s"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.229429 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.233274 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.236709 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd23-account-create-update-xkg9s"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.246911 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.274237 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-s7bzd"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.275805 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.292926 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.300774 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s7bzd"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.383875 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxkw\" (UniqueName: \"kubernetes.io/projected/e4d77c8a-eaac-4c29-8006-66c3882e909f-kube-api-access-vjxkw\") pod \"keystone-cd23-account-create-update-xkg9s\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.384078 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfkh\" (UniqueName: \"kubernetes.io/projected/87250d2e-2d43-478a-9500-33cc335bca50-kube-api-access-fnfkh\") pod \"keystone-db-create-s7bzd\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.384129 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d77c8a-eaac-4c29-8006-66c3882e909f-operator-scripts\") pod \"keystone-cd23-account-create-update-xkg9s\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " 
pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.384188 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87250d2e-2d43-478a-9500-33cc335bca50-operator-scripts\") pod \"keystone-db-create-s7bzd\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.469164 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-m62nb"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.475461 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-m62nb"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.475542 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.485598 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnfkh\" (UniqueName: \"kubernetes.io/projected/87250d2e-2d43-478a-9500-33cc335bca50-kube-api-access-fnfkh\") pod \"keystone-db-create-s7bzd\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.485677 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d77c8a-eaac-4c29-8006-66c3882e909f-operator-scripts\") pod \"keystone-cd23-account-create-update-xkg9s\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.485742 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/87250d2e-2d43-478a-9500-33cc335bca50-operator-scripts\") pod \"keystone-db-create-s7bzd\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.485777 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxkw\" (UniqueName: \"kubernetes.io/projected/e4d77c8a-eaac-4c29-8006-66c3882e909f-kube-api-access-vjxkw\") pod \"keystone-cd23-account-create-update-xkg9s\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.487533 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d77c8a-eaac-4c29-8006-66c3882e909f-operator-scripts\") pod \"keystone-cd23-account-create-update-xkg9s\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.488024 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87250d2e-2d43-478a-9500-33cc335bca50-operator-scripts\") pod \"keystone-db-create-s7bzd\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.501851 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnfkh\" (UniqueName: \"kubernetes.io/projected/87250d2e-2d43-478a-9500-33cc335bca50-kube-api-access-fnfkh\") pod \"keystone-db-create-s7bzd\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.501865 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxkw\" (UniqueName: 
\"kubernetes.io/projected/e4d77c8a-eaac-4c29-8006-66c3882e909f-kube-api-access-vjxkw\") pod \"keystone-cd23-account-create-update-xkg9s\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.570659 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-22a2-account-create-update-8gxc5"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.571923 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.574026 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.574302 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.577205 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22a2-account-create-update-8gxc5"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.588329 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxmld\" (UniqueName: \"kubernetes.io/projected/e945d4e3-9ccb-449d-880c-3ef6ea90048c-kube-api-access-dxmld\") pod \"placement-db-create-m62nb\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.588429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e945d4e3-9ccb-449d-880c-3ef6ea90048c-operator-scripts\") pod \"placement-db-create-m62nb\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc 
kubenswrapper[4865]: I0103 04:34:00.601510 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.688349 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5bec493c-4a1f-49db-b9f3-d05bffd3541b","Type":"ContainerStarted","Data":"06afd5bb82774a8e8c178d909bfc4bea3d9499b72fffc1e03afb0485cb58ec6c"} Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.688406 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5bec493c-4a1f-49db-b9f3-d05bffd3541b","Type":"ContainerStarted","Data":"e31c122a2316d122aed310f4d6bea52164095dcca6a37b0117732a8042630632"} Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.688470 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.690517 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxmld\" (UniqueName: \"kubernetes.io/projected/e945d4e3-9ccb-449d-880c-3ef6ea90048c-kube-api-access-dxmld\") pod \"placement-db-create-m62nb\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.690590 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e945d4e3-9ccb-449d-880c-3ef6ea90048c-operator-scripts\") pod \"placement-db-create-m62nb\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.690612 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kqz\" (UniqueName: \"kubernetes.io/projected/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-kube-api-access-84kqz\") pod 
\"placement-22a2-account-create-update-8gxc5\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.690650 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-operator-scripts\") pod \"placement-22a2-account-create-update-8gxc5\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.692240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e945d4e3-9ccb-449d-880c-3ef6ea90048c-operator-scripts\") pod \"placement-db-create-m62nb\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.697768 4865 generic.go:334] "Generic (PLEG): container finished" podID="6baae859-c56d-42e9-a3da-1e883afc3047" containerID="d83185e2adfc2d1ca98f6bdc1e735ba4e4ea5beeedb065a7ca179482e83df3bb" exitCode=0 Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.697828 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x464g" event={"ID":"6baae859-c56d-42e9-a3da-1e883afc3047","Type":"ContainerDied","Data":"d83185e2adfc2d1ca98f6bdc1e735ba4e4ea5beeedb065a7ca179482e83df3bb"} Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.740392 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.384385583 podStartE2EDuration="3.740352607s" podCreationTimestamp="2026-01-03 04:33:57 +0000 UTC" firstStartedPulling="2026-01-03 04:33:58.729214834 +0000 UTC m=+1065.846268019" lastFinishedPulling="2026-01-03 04:34:00.085181818 +0000 UTC m=+1067.202235043" 
observedRunningTime="2026-01-03 04:34:00.717867388 +0000 UTC m=+1067.834920573" watchObservedRunningTime="2026-01-03 04:34:00.740352607 +0000 UTC m=+1067.857405792" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.766677 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxmld\" (UniqueName: \"kubernetes.io/projected/e945d4e3-9ccb-449d-880c-3ef6ea90048c-kube-api-access-dxmld\") pod \"placement-db-create-m62nb\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.778166 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-m26q5"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.779281 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m26q5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.788315 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m26q5"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.794392 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84kqz\" (UniqueName: \"kubernetes.io/projected/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-kube-api-access-84kqz\") pod \"placement-22a2-account-create-update-8gxc5\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.794462 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-operator-scripts\") pod \"placement-22a2-account-create-update-8gxc5\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.799845 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-operator-scripts\") pod \"placement-22a2-account-create-update-8gxc5\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.806668 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m62nb" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.814426 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84kqz\" (UniqueName: \"kubernetes.io/projected/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-kube-api-access-84kqz\") pod \"placement-22a2-account-create-update-8gxc5\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.832115 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.891077 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.891664 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-61cb-account-create-update-5r7hx"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.892609 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.894980 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.896310 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgv5\" (UniqueName: \"kubernetes.io/projected/94c0b5b3-662d-4b55-a743-0e8652bd72b3-kube-api-access-szgv5\") pod \"glance-db-create-m26q5\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " pod="openstack/glance-db-create-m26q5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.896405 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c0b5b3-662d-4b55-a743-0e8652bd72b3-operator-scripts\") pod \"glance-db-create-m26q5\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " pod="openstack/glance-db-create-m26q5" Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.899337 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-61cb-account-create-update-5r7hx"] Jan 03 04:34:00 crc kubenswrapper[4865]: I0103 04:34:00.908525 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.002268 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c0b5b3-662d-4b55-a743-0e8652bd72b3-operator-scripts\") pod \"glance-db-create-m26q5\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " pod="openstack/glance-db-create-m26q5" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.002620 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szgv5\" (UniqueName: 
\"kubernetes.io/projected/94c0b5b3-662d-4b55-a743-0e8652bd72b3-kube-api-access-szgv5\") pod \"glance-db-create-m26q5\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " pod="openstack/glance-db-create-m26q5" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.002659 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnjf\" (UniqueName: \"kubernetes.io/projected/059d7a19-c549-4eeb-bcbb-6be0e69475e6-kube-api-access-dvnjf\") pod \"glance-61cb-account-create-update-5r7hx\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.002689 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/059d7a19-c549-4eeb-bcbb-6be0e69475e6-operator-scripts\") pod \"glance-61cb-account-create-update-5r7hx\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.002997 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c0b5b3-662d-4b55-a743-0e8652bd72b3-operator-scripts\") pod \"glance-db-create-m26q5\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " pod="openstack/glance-db-create-m26q5" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.026593 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szgv5\" (UniqueName: \"kubernetes.io/projected/94c0b5b3-662d-4b55-a743-0e8652bd72b3-kube-api-access-szgv5\") pod \"glance-db-create-m26q5\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " pod="openstack/glance-db-create-m26q5" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.094414 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m26q5" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.101963 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd23-account-create-update-xkg9s"] Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.113678 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnjf\" (UniqueName: \"kubernetes.io/projected/059d7a19-c549-4eeb-bcbb-6be0e69475e6-kube-api-access-dvnjf\") pod \"glance-61cb-account-create-update-5r7hx\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.113748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/059d7a19-c549-4eeb-bcbb-6be0e69475e6-operator-scripts\") pod \"glance-61cb-account-create-update-5r7hx\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.114776 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/059d7a19-c549-4eeb-bcbb-6be0e69475e6-operator-scripts\") pod \"glance-61cb-account-create-update-5r7hx\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:01 crc kubenswrapper[4865]: W0103 04:34:01.124769 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4d77c8a_eaac_4c29_8006_66c3882e909f.slice/crio-f55dd8efd3be14f284b6162562ef5b94dc4c06d80e565d74fc5aedb302d7eb91 WatchSource:0}: Error finding container f55dd8efd3be14f284b6162562ef5b94dc4c06d80e565d74fc5aedb302d7eb91: Status 404 returned error can't find the container with id 
f55dd8efd3be14f284b6162562ef5b94dc4c06d80e565d74fc5aedb302d7eb91 Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.140925 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnjf\" (UniqueName: \"kubernetes.io/projected/059d7a19-c549-4eeb-bcbb-6be0e69475e6-kube-api-access-dvnjf\") pod \"glance-61cb-account-create-update-5r7hx\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.218245 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.228488 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s7bzd"] Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.337204 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-m62nb"] Jan 03 04:34:01 crc kubenswrapper[4865]: W0103 04:34:01.355185 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode945d4e3_9ccb_449d_880c_3ef6ea90048c.slice/crio-9f1f80831ba5df8bf73a866658f9b739d693074a489b087117356a6eb1799e77 WatchSource:0}: Error finding container 9f1f80831ba5df8bf73a866658f9b739d693074a489b087117356a6eb1799e77: Status 404 returned error can't find the container with id 9f1f80831ba5df8bf73a866658f9b739d693074a489b087117356a6eb1799e77 Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.446816 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22a2-account-create-update-8gxc5"] Jan 03 04:34:01 crc kubenswrapper[4865]: W0103 04:34:01.467702 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf46abc38_3d62_4fcc_8dfe_13bbb6f9bc22.slice/crio-cccabe84ebf30459ec7493dbba3021e83566b820905513e97cd1b938c87f056d WatchSource:0}: Error finding container cccabe84ebf30459ec7493dbba3021e83566b820905513e97cd1b938c87f056d: Status 404 returned error can't find the container with id cccabe84ebf30459ec7493dbba3021e83566b820905513e97cd1b938c87f056d Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.575676 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m26q5"] Jan 03 04:34:01 crc kubenswrapper[4865]: W0103 04:34:01.582086 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94c0b5b3_662d_4b55_a743_0e8652bd72b3.slice/crio-de69efa997dac56552202cc52c99639cd75c99617940f8a76c833c4590a11be6 WatchSource:0}: Error finding container de69efa997dac56552202cc52c99639cd75c99617940f8a76c833c4590a11be6: Status 404 returned error can't find the container with id de69efa997dac56552202cc52c99639cd75c99617940f8a76c833c4590a11be6 Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.675918 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-61cb-account-create-update-5r7hx"] Jan 03 04:34:01 crc kubenswrapper[4865]: W0103 04:34:01.700356 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod059d7a19_c549_4eeb_bcbb_6be0e69475e6.slice/crio-c53bb32389a4181fda9655f334365beda14f680d16a4a38a92329afd32aeed99 WatchSource:0}: Error finding container c53bb32389a4181fda9655f334365beda14f680d16a4a38a92329afd32aeed99: Status 404 returned error can't find the container with id c53bb32389a4181fda9655f334365beda14f680d16a4a38a92329afd32aeed99 Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.708175 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-22a2-account-create-update-8gxc5" event={"ID":"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22","Type":"ContainerStarted","Data":"30dc16f8afb38cd17fe22c23263144ce9919da9ca4427de44bb4da8ff036ae26"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.708217 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22a2-account-create-update-8gxc5" event={"ID":"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22","Type":"ContainerStarted","Data":"cccabe84ebf30459ec7493dbba3021e83566b820905513e97cd1b938c87f056d"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.711100 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m26q5" event={"ID":"94c0b5b3-662d-4b55-a743-0e8652bd72b3","Type":"ContainerStarted","Data":"de69efa997dac56552202cc52c99639cd75c99617940f8a76c833c4590a11be6"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.715490 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m62nb" event={"ID":"e945d4e3-9ccb-449d-880c-3ef6ea90048c","Type":"ContainerStarted","Data":"4832be279f13117c576ccd6cf6b34bc53465d68df3c764b92c1b1d4fe611fa29"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.715528 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m62nb" event={"ID":"e945d4e3-9ccb-449d-880c-3ef6ea90048c","Type":"ContainerStarted","Data":"9f1f80831ba5df8bf73a866658f9b739d693074a489b087117356a6eb1799e77"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.718023 4865 generic.go:334] "Generic (PLEG): container finished" podID="87250d2e-2d43-478a-9500-33cc335bca50" containerID="953da7c959bc285bd197c1a16fb16bed719f8c662ea8ab8dc4e2272298266edc" exitCode=0 Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.718143 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7bzd" 
event={"ID":"87250d2e-2d43-478a-9500-33cc335bca50","Type":"ContainerDied","Data":"953da7c959bc285bd197c1a16fb16bed719f8c662ea8ab8dc4e2272298266edc"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.718193 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7bzd" event={"ID":"87250d2e-2d43-478a-9500-33cc335bca50","Type":"ContainerStarted","Data":"afd3c1085d44d4edddfbc4bc12ad3b4f05dfb7be27c073f4b03eb818a850fd98"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.720217 4865 generic.go:334] "Generic (PLEG): container finished" podID="e4d77c8a-eaac-4c29-8006-66c3882e909f" containerID="7806a2836b7bd2dcf37c505b6e3febcaa66944edf94a65548d71d5031a035303" exitCode=0 Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.720284 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd23-account-create-update-xkg9s" event={"ID":"e4d77c8a-eaac-4c29-8006-66c3882e909f","Type":"ContainerDied","Data":"7806a2836b7bd2dcf37c505b6e3febcaa66944edf94a65548d71d5031a035303"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.720366 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd23-account-create-update-xkg9s" event={"ID":"e4d77c8a-eaac-4c29-8006-66c3882e909f","Type":"ContainerStarted","Data":"f55dd8efd3be14f284b6162562ef5b94dc4c06d80e565d74fc5aedb302d7eb91"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.724481 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"299ac88c0f40eb955ac21ff62611b7e5f859d718df77ddf40f4e21e2458fd115"} Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.730863 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-22a2-account-create-update-8gxc5" podStartSLOduration=1.730841089 podStartE2EDuration="1.730841089s" podCreationTimestamp="2026-01-03 04:34:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:01.722293227 +0000 UTC m=+1068.839346412" watchObservedRunningTime="2026-01-03 04:34:01.730841089 +0000 UTC m=+1068.847894274" Jan 03 04:34:01 crc kubenswrapper[4865]: I0103 04:34:01.756572 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-m62nb" podStartSLOduration=1.756553305 podStartE2EDuration="1.756553305s" podCreationTimestamp="2026-01-03 04:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:01.752238928 +0000 UTC m=+1068.869292133" watchObservedRunningTime="2026-01-03 04:34:01.756553305 +0000 UTC m=+1068.873606490" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.466507 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.543172 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-combined-ca-bundle\") pod \"6baae859-c56d-42e9-a3da-1e883afc3047\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.543216 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-swiftconf\") pod \"6baae859-c56d-42e9-a3da-1e883afc3047\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.543241 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-ring-data-devices\") pod 
\"6baae859-c56d-42e9-a3da-1e883afc3047\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.543305 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-dispersionconf\") pod \"6baae859-c56d-42e9-a3da-1e883afc3047\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.543517 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6baae859-c56d-42e9-a3da-1e883afc3047-etc-swift\") pod \"6baae859-c56d-42e9-a3da-1e883afc3047\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.543648 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptg8l\" (UniqueName: \"kubernetes.io/projected/6baae859-c56d-42e9-a3da-1e883afc3047-kube-api-access-ptg8l\") pod \"6baae859-c56d-42e9-a3da-1e883afc3047\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.543673 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-scripts\") pod \"6baae859-c56d-42e9-a3da-1e883afc3047\" (UID: \"6baae859-c56d-42e9-a3da-1e883afc3047\") " Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.545261 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6baae859-c56d-42e9-a3da-1e883afc3047" (UID: "6baae859-c56d-42e9-a3da-1e883afc3047"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.546050 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6baae859-c56d-42e9-a3da-1e883afc3047-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6baae859-c56d-42e9-a3da-1e883afc3047" (UID: "6baae859-c56d-42e9-a3da-1e883afc3047"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.551806 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6baae859-c56d-42e9-a3da-1e883afc3047-kube-api-access-ptg8l" (OuterVolumeSpecName: "kube-api-access-ptg8l") pod "6baae859-c56d-42e9-a3da-1e883afc3047" (UID: "6baae859-c56d-42e9-a3da-1e883afc3047"). InnerVolumeSpecName "kube-api-access-ptg8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.557981 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6baae859-c56d-42e9-a3da-1e883afc3047" (UID: "6baae859-c56d-42e9-a3da-1e883afc3047"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.571531 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6baae859-c56d-42e9-a3da-1e883afc3047" (UID: "6baae859-c56d-42e9-a3da-1e883afc3047"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.572665 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6baae859-c56d-42e9-a3da-1e883afc3047" (UID: "6baae859-c56d-42e9-a3da-1e883afc3047"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.572966 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-scripts" (OuterVolumeSpecName: "scripts") pod "6baae859-c56d-42e9-a3da-1e883afc3047" (UID: "6baae859-c56d-42e9-a3da-1e883afc3047"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.645918 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptg8l\" (UniqueName: \"kubernetes.io/projected/6baae859-c56d-42e9-a3da-1e883afc3047-kube-api-access-ptg8l\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.646284 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.646296 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.646304 4865 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 03 
04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.646313 4865 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6baae859-c56d-42e9-a3da-1e883afc3047-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.646324 4865 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6baae859-c56d-42e9-a3da-1e883afc3047-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.646347 4865 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6baae859-c56d-42e9-a3da-1e883afc3047-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.740313 4865 generic.go:334] "Generic (PLEG): container finished" podID="94c0b5b3-662d-4b55-a743-0e8652bd72b3" containerID="f1efceb784003c065e4ff5445f9037f7bee1b4be7904303d0c83055685aa6030" exitCode=0 Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.740351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m26q5" event={"ID":"94c0b5b3-662d-4b55-a743-0e8652bd72b3","Type":"ContainerDied","Data":"f1efceb784003c065e4ff5445f9037f7bee1b4be7904303d0c83055685aa6030"} Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.741590 4865 generic.go:334] "Generic (PLEG): container finished" podID="e945d4e3-9ccb-449d-880c-3ef6ea90048c" containerID="4832be279f13117c576ccd6cf6b34bc53465d68df3c764b92c1b1d4fe611fa29" exitCode=0 Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.741833 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m62nb" event={"ID":"e945d4e3-9ccb-449d-880c-3ef6ea90048c","Type":"ContainerDied","Data":"4832be279f13117c576ccd6cf6b34bc53465d68df3c764b92c1b1d4fe611fa29"} Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.745783 4865 
generic.go:334] "Generic (PLEG): container finished" podID="059d7a19-c549-4eeb-bcbb-6be0e69475e6" containerID="9199f235fa9d3a7c99bcdeb94581b4181ab201ca0eab0e8d19871ce819061ef3" exitCode=0 Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.745825 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-61cb-account-create-update-5r7hx" event={"ID":"059d7a19-c549-4eeb-bcbb-6be0e69475e6","Type":"ContainerDied","Data":"9199f235fa9d3a7c99bcdeb94581b4181ab201ca0eab0e8d19871ce819061ef3"} Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.745843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-61cb-account-create-update-5r7hx" event={"ID":"059d7a19-c549-4eeb-bcbb-6be0e69475e6","Type":"ContainerStarted","Data":"c53bb32389a4181fda9655f334365beda14f680d16a4a38a92329afd32aeed99"} Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.749668 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"a1262c55c2cb8005423d2c1009dfc6b6593b58d38dc5c548eb1646038ada66cb"} Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.763238 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x464g" event={"ID":"6baae859-c56d-42e9-a3da-1e883afc3047","Type":"ContainerDied","Data":"00e9ba7d003180ad5c519322904ccf69db2ed031731ffc35105c09eb37888922"} Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.763282 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e9ba7d003180ad5c519322904ccf69db2ed031731ffc35105c09eb37888922" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.763345 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x464g" Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.767406 4865 generic.go:334] "Generic (PLEG): container finished" podID="f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22" containerID="30dc16f8afb38cd17fe22c23263144ce9919da9ca4427de44bb4da8ff036ae26" exitCode=0 Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.767906 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22a2-account-create-update-8gxc5" event={"ID":"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22","Type":"ContainerDied","Data":"30dc16f8afb38cd17fe22c23263144ce9919da9ca4427de44bb4da8ff036ae26"} Jan 03 04:34:02 crc kubenswrapper[4865]: I0103 04:34:02.832414 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.265581 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.268919 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.363417 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnfkh\" (UniqueName: \"kubernetes.io/projected/87250d2e-2d43-478a-9500-33cc335bca50-kube-api-access-fnfkh\") pod \"87250d2e-2d43-478a-9500-33cc335bca50\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.363490 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d77c8a-eaac-4c29-8006-66c3882e909f-operator-scripts\") pod \"e4d77c8a-eaac-4c29-8006-66c3882e909f\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.363592 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjxkw\" (UniqueName: \"kubernetes.io/projected/e4d77c8a-eaac-4c29-8006-66c3882e909f-kube-api-access-vjxkw\") pod \"e4d77c8a-eaac-4c29-8006-66c3882e909f\" (UID: \"e4d77c8a-eaac-4c29-8006-66c3882e909f\") " Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.363617 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87250d2e-2d43-478a-9500-33cc335bca50-operator-scripts\") pod \"87250d2e-2d43-478a-9500-33cc335bca50\" (UID: \"87250d2e-2d43-478a-9500-33cc335bca50\") " Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.364054 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87250d2e-2d43-478a-9500-33cc335bca50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87250d2e-2d43-478a-9500-33cc335bca50" (UID: "87250d2e-2d43-478a-9500-33cc335bca50"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.364075 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d77c8a-eaac-4c29-8006-66c3882e909f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4d77c8a-eaac-4c29-8006-66c3882e909f" (UID: "e4d77c8a-eaac-4c29-8006-66c3882e909f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.364447 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87250d2e-2d43-478a-9500-33cc335bca50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.364463 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d77c8a-eaac-4c29-8006-66c3882e909f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.369782 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d77c8a-eaac-4c29-8006-66c3882e909f-kube-api-access-vjxkw" (OuterVolumeSpecName: "kube-api-access-vjxkw") pod "e4d77c8a-eaac-4c29-8006-66c3882e909f" (UID: "e4d77c8a-eaac-4c29-8006-66c3882e909f"). InnerVolumeSpecName "kube-api-access-vjxkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.380555 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87250d2e-2d43-478a-9500-33cc335bca50-kube-api-access-fnfkh" (OuterVolumeSpecName: "kube-api-access-fnfkh") pod "87250d2e-2d43-478a-9500-33cc335bca50" (UID: "87250d2e-2d43-478a-9500-33cc335bca50"). InnerVolumeSpecName "kube-api-access-fnfkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.465860 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnfkh\" (UniqueName: \"kubernetes.io/projected/87250d2e-2d43-478a-9500-33cc335bca50-kube-api-access-fnfkh\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.465906 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjxkw\" (UniqueName: \"kubernetes.io/projected/e4d77c8a-eaac-4c29-8006-66c3882e909f-kube-api-access-vjxkw\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.780016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"15cc65a5a31479b4f3df0969e05262d20d71184869c6f91d20d1c8f9c14561d4"} Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.780076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"b44dc66b94072438c0977b3cb4afb91c76f04bc72e6e7e6125f914b97e59c8c6"} Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.780097 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"3139dc2a39019f852e47dd431ed3a50a407f3fc5aad33f1509980606aaf83d27"} Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.782538 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7bzd" event={"ID":"87250d2e-2d43-478a-9500-33cc335bca50","Type":"ContainerDied","Data":"afd3c1085d44d4edddfbc4bc12ad3b4f05dfb7be27c073f4b03eb818a850fd98"} Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.782572 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s7bzd" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.782581 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd3c1085d44d4edddfbc4bc12ad3b4f05dfb7be27c073f4b03eb818a850fd98" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.785520 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd23-account-create-update-xkg9s" event={"ID":"e4d77c8a-eaac-4c29-8006-66c3882e909f","Type":"ContainerDied","Data":"f55dd8efd3be14f284b6162562ef5b94dc4c06d80e565d74fc5aedb302d7eb91"} Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.785660 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55dd8efd3be14f284b6162562ef5b94dc4c06d80e565d74fc5aedb302d7eb91" Jan 03 04:34:03 crc kubenswrapper[4865]: I0103 04:34:03.785832 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd23-account-create-update-xkg9s" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.143486 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.208057 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84kqz\" (UniqueName: \"kubernetes.io/projected/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-kube-api-access-84kqz\") pod \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.208156 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-operator-scripts\") pod \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\" (UID: \"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.208703 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22" (UID: "f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.230040 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-kube-api-access-84kqz" (OuterVolumeSpecName: "kube-api-access-84kqz") pod "f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22" (UID: "f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22"). InnerVolumeSpecName "kube-api-access-84kqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.312780 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84kqz\" (UniqueName: \"kubernetes.io/projected/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-kube-api-access-84kqz\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.312808 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.526205 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m62nb" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.533482 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.557243 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m26q5" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.618954 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxmld\" (UniqueName: \"kubernetes.io/projected/e945d4e3-9ccb-449d-880c-3ef6ea90048c-kube-api-access-dxmld\") pod \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.618989 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvnjf\" (UniqueName: \"kubernetes.io/projected/059d7a19-c549-4eeb-bcbb-6be0e69475e6-kube-api-access-dvnjf\") pod \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.619035 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/059d7a19-c549-4eeb-bcbb-6be0e69475e6-operator-scripts\") pod \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\" (UID: \"059d7a19-c549-4eeb-bcbb-6be0e69475e6\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.619100 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e945d4e3-9ccb-449d-880c-3ef6ea90048c-operator-scripts\") pod \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\" (UID: \"e945d4e3-9ccb-449d-880c-3ef6ea90048c\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.619894 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e945d4e3-9ccb-449d-880c-3ef6ea90048c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e945d4e3-9ccb-449d-880c-3ef6ea90048c" (UID: "e945d4e3-9ccb-449d-880c-3ef6ea90048c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.620225 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/059d7a19-c549-4eeb-bcbb-6be0e69475e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "059d7a19-c549-4eeb-bcbb-6be0e69475e6" (UID: "059d7a19-c549-4eeb-bcbb-6be0e69475e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.623827 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e945d4e3-9ccb-449d-880c-3ef6ea90048c-kube-api-access-dxmld" (OuterVolumeSpecName: "kube-api-access-dxmld") pod "e945d4e3-9ccb-449d-880c-3ef6ea90048c" (UID: "e945d4e3-9ccb-449d-880c-3ef6ea90048c"). InnerVolumeSpecName "kube-api-access-dxmld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.627575 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059d7a19-c549-4eeb-bcbb-6be0e69475e6-kube-api-access-dvnjf" (OuterVolumeSpecName: "kube-api-access-dvnjf") pod "059d7a19-c549-4eeb-bcbb-6be0e69475e6" (UID: "059d7a19-c549-4eeb-bcbb-6be0e69475e6"). InnerVolumeSpecName "kube-api-access-dvnjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.720866 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c0b5b3-662d-4b55-a743-0e8652bd72b3-operator-scripts\") pod \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.721079 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szgv5\" (UniqueName: \"kubernetes.io/projected/94c0b5b3-662d-4b55-a743-0e8652bd72b3-kube-api-access-szgv5\") pod \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\" (UID: \"94c0b5b3-662d-4b55-a743-0e8652bd72b3\") " Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.721522 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e945d4e3-9ccb-449d-880c-3ef6ea90048c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.721539 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxmld\" (UniqueName: \"kubernetes.io/projected/e945d4e3-9ccb-449d-880c-3ef6ea90048c-kube-api-access-dxmld\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.721552 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvnjf\" (UniqueName: \"kubernetes.io/projected/059d7a19-c549-4eeb-bcbb-6be0e69475e6-kube-api-access-dvnjf\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.721563 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/059d7a19-c549-4eeb-bcbb-6be0e69475e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.721936 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c0b5b3-662d-4b55-a743-0e8652bd72b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94c0b5b3-662d-4b55-a743-0e8652bd72b3" (UID: "94c0b5b3-662d-4b55-a743-0e8652bd72b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.726361 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c0b5b3-662d-4b55-a743-0e8652bd72b3-kube-api-access-szgv5" (OuterVolumeSpecName: "kube-api-access-szgv5") pod "94c0b5b3-662d-4b55-a743-0e8652bd72b3" (UID: "94c0b5b3-662d-4b55-a743-0e8652bd72b3"). InnerVolumeSpecName "kube-api-access-szgv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.802452 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"1f7f3f17f7544dea6aa246ebfe543469543363516aa7eb0ae6e782111c07516c"} Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.805098 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22a2-account-create-update-8gxc5" event={"ID":"f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22","Type":"ContainerDied","Data":"cccabe84ebf30459ec7493dbba3021e83566b820905513e97cd1b938c87f056d"} Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.805132 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cccabe84ebf30459ec7493dbba3021e83566b820905513e97cd1b938c87f056d" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.805128 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22a2-account-create-update-8gxc5" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.806521 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m26q5" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.809839 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m26q5" event={"ID":"94c0b5b3-662d-4b55-a743-0e8652bd72b3","Type":"ContainerDied","Data":"de69efa997dac56552202cc52c99639cd75c99617940f8a76c833c4590a11be6"} Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.809900 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de69efa997dac56552202cc52c99639cd75c99617940f8a76c833c4590a11be6" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.811241 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m62nb" event={"ID":"e945d4e3-9ccb-449d-880c-3ef6ea90048c","Type":"ContainerDied","Data":"9f1f80831ba5df8bf73a866658f9b739d693074a489b087117356a6eb1799e77"} Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.811264 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1f80831ba5df8bf73a866658f9b739d693074a489b087117356a6eb1799e77" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.811264 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-m62nb" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.823154 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szgv5\" (UniqueName: \"kubernetes.io/projected/94c0b5b3-662d-4b55-a743-0e8652bd72b3-kube-api-access-szgv5\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.823192 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c0b5b3-662d-4b55-a743-0e8652bd72b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.823928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-61cb-account-create-update-5r7hx" event={"ID":"059d7a19-c549-4eeb-bcbb-6be0e69475e6","Type":"ContainerDied","Data":"c53bb32389a4181fda9655f334365beda14f680d16a4a38a92329afd32aeed99"} Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.823969 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53bb32389a4181fda9655f334365beda14f680d16a4a38a92329afd32aeed99" Jan 03 04:34:04 crc kubenswrapper[4865]: I0103 04:34:04.824092 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-61cb-account-create-update-5r7hx" Jan 03 04:34:05 crc kubenswrapper[4865]: I0103 04:34:05.831563 4865 generic.go:334] "Generic (PLEG): container finished" podID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerID="0f49289493175c6b66587e9f00a161681c04ea251dabfe0a77f61cbf5a9a8e38" exitCode=0 Jan 03 04:34:05 crc kubenswrapper[4865]: I0103 04:34:05.831641 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3d1e308-7d01-4224-9cc0-a5ed59256c80","Type":"ContainerDied","Data":"0f49289493175c6b66587e9f00a161681c04ea251dabfe0a77f61cbf5a9a8e38"} Jan 03 04:34:05 crc kubenswrapper[4865]: I0103 04:34:05.836899 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"6bf8f90b641a819e958293574dc19a8dc650a9e611f1a6c024f1b4cce0c2d45f"} Jan 03 04:34:05 crc kubenswrapper[4865]: I0103 04:34:05.838839 4865 generic.go:334] "Generic (PLEG): container finished" podID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerID="58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c" exitCode=0 Jan 03 04:34:05 crc kubenswrapper[4865]: I0103 04:34:05.838872 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41","Type":"ContainerDied","Data":"58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c"} Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.152423 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gdpqx"] Jan 03 04:34:06 crc kubenswrapper[4865]: E0103 04:34:06.152769 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e945d4e3-9ccb-449d-880c-3ef6ea90048c" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.152788 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e945d4e3-9ccb-449d-880c-3ef6ea90048c" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: E0103 04:34:06.152809 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.152818 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: E0103 04:34:06.152839 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059d7a19-c549-4eeb-bcbb-6be0e69475e6" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.152848 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="059d7a19-c549-4eeb-bcbb-6be0e69475e6" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: E0103 04:34:06.152869 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87250d2e-2d43-478a-9500-33cc335bca50" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.152876 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="87250d2e-2d43-478a-9500-33cc335bca50" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: E0103 04:34:06.152888 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c0b5b3-662d-4b55-a743-0e8652bd72b3" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.152896 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c0b5b3-662d-4b55-a743-0e8652bd72b3" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: E0103 04:34:06.152908 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d77c8a-eaac-4c29-8006-66c3882e909f" containerName="mariadb-account-create-update" Jan 03 04:34:06 
crc kubenswrapper[4865]: I0103 04:34:06.152915 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d77c8a-eaac-4c29-8006-66c3882e909f" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: E0103 04:34:06.152927 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6baae859-c56d-42e9-a3da-1e883afc3047" containerName="swift-ring-rebalance" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.152935 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6baae859-c56d-42e9-a3da-1e883afc3047" containerName="swift-ring-rebalance" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.153120 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6baae859-c56d-42e9-a3da-1e883afc3047" containerName="swift-ring-rebalance" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.153138 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.153156 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c0b5b3-662d-4b55-a743-0e8652bd72b3" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.153171 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="87250d2e-2d43-478a-9500-33cc335bca50" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.153183 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e945d4e3-9ccb-449d-880c-3ef6ea90048c" containerName="mariadb-database-create" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.153195 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d77c8a-eaac-4c29-8006-66c3882e909f" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.153210 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="059d7a19-c549-4eeb-bcbb-6be0e69475e6" containerName="mariadb-account-create-update" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.154167 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.159203 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fg978" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.164437 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.176025 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gdpqx"] Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.247148 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-combined-ca-bundle\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.247193 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-db-sync-config-data\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.247226 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-config-data\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 
04:34:06.247373 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjtm\" (UniqueName: \"kubernetes.io/projected/89004a40-1d1d-46ec-a342-a067fb1eaa54-kube-api-access-kcjtm\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.348778 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-combined-ca-bundle\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.348830 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-db-sync-config-data\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.348859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-config-data\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.348904 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjtm\" (UniqueName: \"kubernetes.io/projected/89004a40-1d1d-46ec-a342-a067fb1eaa54-kube-api-access-kcjtm\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.354255 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-config-data\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.360811 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-db-sync-config-data\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.361560 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-combined-ca-bundle\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.379593 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjtm\" (UniqueName: \"kubernetes.io/projected/89004a40-1d1d-46ec-a342-a067fb1eaa54-kube-api-access-kcjtm\") pod \"glance-db-sync-gdpqx\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.468749 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gdpqx" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.850866 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"f7712428dc551dca115f1f1a979fe56d934cb046de1bc52a85478dd47b2af74c"} Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.851148 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"a64468de988f07dc287d5d69f5162bbd435429be53ff44d1cb08f86071e2be3b"} Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.852417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3d1e308-7d01-4224-9cc0-a5ed59256c80","Type":"ContainerStarted","Data":"16f9af045039134433d1c81cb67c98ecad80802609a88144980e30cc8a9f68cb"} Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.852661 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.854117 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41","Type":"ContainerStarted","Data":"a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32"} Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.854334 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.874000 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.404966169 podStartE2EDuration="1m11.873984036s" podCreationTimestamp="2026-01-03 04:32:55 +0000 UTC" firstStartedPulling="2026-01-03 04:33:09.565862885 +0000 UTC m=+1016.682916070" 
lastFinishedPulling="2026-01-03 04:33:30.034880762 +0000 UTC m=+1037.151933937" observedRunningTime="2026-01-03 04:34:06.87189396 +0000 UTC m=+1073.988947145" watchObservedRunningTime="2026-01-03 04:34:06.873984036 +0000 UTC m=+1073.991037211" Jan 03 04:34:06 crc kubenswrapper[4865]: I0103 04:34:06.902429 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.864240397 podStartE2EDuration="1m10.902410727s" podCreationTimestamp="2026-01-03 04:32:56 +0000 UTC" firstStartedPulling="2026-01-03 04:33:09.671695973 +0000 UTC m=+1016.788749158" lastFinishedPulling="2026-01-03 04:33:30.709866303 +0000 UTC m=+1037.826919488" observedRunningTime="2026-01-03 04:34:06.893916217 +0000 UTC m=+1074.010969402" watchObservedRunningTime="2026-01-03 04:34:06.902410727 +0000 UTC m=+1074.019463912" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.091805 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gdpqx"] Jan 03 04:34:07 crc kubenswrapper[4865]: W0103 04:34:07.097885 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89004a40_1d1d_46ec_a342_a067fb1eaa54.slice/crio-9646ba50aec3cae9f2c35f0f000e008acc5a4ab0e58296bf126698d2b1b29855 WatchSource:0}: Error finding container 9646ba50aec3cae9f2c35f0f000e008acc5a4ab0e58296bf126698d2b1b29855: Status 404 returned error can't find the container with id 9646ba50aec3cae9f2c35f0f000e008acc5a4ab0e58296bf126698d2b1b29855 Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.453130 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zfnlj"] Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.454444 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.455709 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.468340 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zfnlj"] Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.604266 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08fdac8c-3df5-49ff-a6da-b1ca5e403277-operator-scripts\") pod \"root-account-create-update-zfnlj\" (UID: \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.604701 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68mp\" (UniqueName: \"kubernetes.io/projected/08fdac8c-3df5-49ff-a6da-b1ca5e403277-kube-api-access-p68mp\") pod \"root-account-create-update-zfnlj\" (UID: \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.706214 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p68mp\" (UniqueName: \"kubernetes.io/projected/08fdac8c-3df5-49ff-a6da-b1ca5e403277-kube-api-access-p68mp\") pod \"root-account-create-update-zfnlj\" (UID: \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.706350 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08fdac8c-3df5-49ff-a6da-b1ca5e403277-operator-scripts\") pod \"root-account-create-update-zfnlj\" (UID: 
\"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.707072 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08fdac8c-3df5-49ff-a6da-b1ca5e403277-operator-scripts\") pod \"root-account-create-update-zfnlj\" (UID: \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.730168 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p68mp\" (UniqueName: \"kubernetes.io/projected/08fdac8c-3df5-49ff-a6da-b1ca5e403277-kube-api-access-p68mp\") pod \"root-account-create-update-zfnlj\" (UID: \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.780261 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:07 crc kubenswrapper[4865]: I0103 04:34:07.864617 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gdpqx" event={"ID":"89004a40-1d1d-46ec-a342-a067fb1eaa54","Type":"ContainerStarted","Data":"9646ba50aec3cae9f2c35f0f000e008acc5a4ab0e58296bf126698d2b1b29855"} Jan 03 04:34:08 crc kubenswrapper[4865]: I0103 04:34:08.552270 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zfnlj"] Jan 03 04:34:08 crc kubenswrapper[4865]: W0103 04:34:08.561276 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08fdac8c_3df5_49ff_a6da_b1ca5e403277.slice/crio-76e61630ebc6db06719fbf93e58f955e03ef7a514742d57ded607f7b55d4f93b WatchSource:0}: Error finding container 76e61630ebc6db06719fbf93e58f955e03ef7a514742d57ded607f7b55d4f93b: Status 404 returned error can't find the container with id 76e61630ebc6db06719fbf93e58f955e03ef7a514742d57ded607f7b55d4f93b Jan 03 04:34:08 crc kubenswrapper[4865]: I0103 04:34:08.884767 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zfnlj" event={"ID":"08fdac8c-3df5-49ff-a6da-b1ca5e403277","Type":"ContainerStarted","Data":"7a8fc72b6bacae784057f7d02352f91f39a4da8e3abcdf7177c855d118cf4311"} Jan 03 04:34:08 crc kubenswrapper[4865]: I0103 04:34:08.884807 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zfnlj" event={"ID":"08fdac8c-3df5-49ff-a6da-b1ca5e403277","Type":"ContainerStarted","Data":"76e61630ebc6db06719fbf93e58f955e03ef7a514742d57ded607f7b55d4f93b"} Jan 03 04:34:08 crc kubenswrapper[4865]: I0103 04:34:08.899501 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-zfnlj" podStartSLOduration=1.8994816669999999 podStartE2EDuration="1.899481667s" 
podCreationTimestamp="2026-01-03 04:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:08.898862231 +0000 UTC m=+1076.015915416" watchObservedRunningTime="2026-01-03 04:34:08.899481667 +0000 UTC m=+1076.016534862" Jan 03 04:34:08 crc kubenswrapper[4865]: I0103 04:34:08.907715 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"b196250dabe6075ef51ea6228e4938470572beda7dab40aff91fbd5425de5685"} Jan 03 04:34:08 crc kubenswrapper[4865]: I0103 04:34:08.907760 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"b10a3e0842cb5234b6298590fd3916932c67f8bb45a3f804c470648b1ad25575"} Jan 03 04:34:08 crc kubenswrapper[4865]: I0103 04:34:08.907773 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"3c63271e0633e735af1b10941eb4c6c761964d49e81180a5f9069740c9a2d3df"} Jan 03 04:34:08 crc kubenswrapper[4865]: E0103 04:34:08.993998 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08fdac8c_3df5_49ff_a6da_b1ca5e403277.slice/crio-7a8fc72b6bacae784057f7d02352f91f39a4da8e3abcdf7177c855d118cf4311.scope\": RecentStats: unable to find data in memory cache]" Jan 03 04:34:09 crc kubenswrapper[4865]: I0103 04:34:09.919279 4865 generic.go:334] "Generic (PLEG): container finished" podID="08fdac8c-3df5-49ff-a6da-b1ca5e403277" containerID="7a8fc72b6bacae784057f7d02352f91f39a4da8e3abcdf7177c855d118cf4311" exitCode=0 Jan 03 04:34:09 crc kubenswrapper[4865]: I0103 04:34:09.919430 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/root-account-create-update-zfnlj" event={"ID":"08fdac8c-3df5-49ff-a6da-b1ca5e403277","Type":"ContainerDied","Data":"7a8fc72b6bacae784057f7d02352f91f39a4da8e3abcdf7177c855d118cf4311"} Jan 03 04:34:09 crc kubenswrapper[4865]: I0103 04:34:09.926554 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"8e5f3e6a5f036cefe9c900f004435daf4842fad0183a21dcc71111ae747701dd"} Jan 03 04:34:09 crc kubenswrapper[4865]: I0103 04:34:09.926586 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"6e51466c638731a436d63975acf43d966c38d46a9706df85a0b09c7c7af7eee8"} Jan 03 04:34:09 crc kubenswrapper[4865]: I0103 04:34:09.926597 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"0deca2219e9875e5f28e7ab1ac6b030dbf5a22e3331e7657b135cbd98a18fe0b"} Jan 03 04:34:09 crc kubenswrapper[4865]: I0103 04:34:09.926605 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"870f799a-79a7-40ff-9a9c-ecb096c9bfcb","Type":"ContainerStarted","Data":"c982a32ac97e5f63345d832b81846e9dafe8b8130193a11be4af5a42e414b6ef"} Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.018122 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.683141506 podStartE2EDuration="28.01809991s" podCreationTimestamp="2026-01-03 04:33:42 +0000 UTC" firstStartedPulling="2026-01-03 04:34:00.893254259 +0000 UTC m=+1068.010307434" lastFinishedPulling="2026-01-03 04:34:08.228212663 +0000 UTC m=+1075.345265838" observedRunningTime="2026-01-03 04:34:10.01551905 +0000 UTC m=+1077.132572265" watchObservedRunningTime="2026-01-03 
04:34:10.01809991 +0000 UTC m=+1077.135153115" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.294825 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-5sdb5"] Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.296467 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.300683 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.306897 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-5sdb5"] Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.354487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-config\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.354534 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.354578 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.354610 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.354628 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.354664 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlsg\" (UniqueName: \"kubernetes.io/projected/08401bb5-3c30-45b4-bbf1-f963080642ce-kube-api-access-4xlsg\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.456645 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.456719 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xlsg\" (UniqueName: \"kubernetes.io/projected/08401bb5-3c30-45b4-bbf1-f963080642ce-kube-api-access-4xlsg\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc 
kubenswrapper[4865]: I0103 04:34:10.456784 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-config\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.456811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.456852 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.456889 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.457664 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.457722 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-config\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.457830 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.457849 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.457917 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.475257 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xlsg\" (UniqueName: \"kubernetes.io/projected/08401bb5-3c30-45b4-bbf1-f963080642ce-kube-api-access-4xlsg\") pod \"dnsmasq-dns-77585f5f8c-5sdb5\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:10 crc kubenswrapper[4865]: I0103 04:34:10.656828 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.104535 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-5sdb5"] Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.145053 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gpwwp" podUID="334ea42d-9265-43f9-8c4c-fdf516746069" containerName="ovn-controller" probeResult="failure" output=< Jan 03 04:34:11 crc kubenswrapper[4865]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 03 04:34:11 crc kubenswrapper[4865]: > Jan 03 04:34:11 crc kubenswrapper[4865]: W0103 04:34:11.159327 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08401bb5_3c30_45b4_bbf1_f963080642ce.slice/crio-80f07919a089b6d1532990c2784fa5f68b04f38001ab7474d71a4a323f79586e WatchSource:0}: Error finding container 80f07919a089b6d1532990c2784fa5f68b04f38001ab7474d71a4a323f79586e: Status 404 returned error can't find the container with id 80f07919a089b6d1532990c2784fa5f68b04f38001ab7474d71a4a323f79586e Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.228951 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zfnlj" Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.269151 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08fdac8c-3df5-49ff-a6da-b1ca5e403277-operator-scripts\") pod \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\" (UID: \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.269282 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p68mp\" (UniqueName: \"kubernetes.io/projected/08fdac8c-3df5-49ff-a6da-b1ca5e403277-kube-api-access-p68mp\") pod \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\" (UID: \"08fdac8c-3df5-49ff-a6da-b1ca5e403277\") " Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.270805 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08fdac8c-3df5-49ff-a6da-b1ca5e403277-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08fdac8c-3df5-49ff-a6da-b1ca5e403277" (UID: "08fdac8c-3df5-49ff-a6da-b1ca5e403277"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.274173 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fdac8c-3df5-49ff-a6da-b1ca5e403277-kube-api-access-p68mp" (OuterVolumeSpecName: "kube-api-access-p68mp") pod "08fdac8c-3df5-49ff-a6da-b1ca5e403277" (UID: "08fdac8c-3df5-49ff-a6da-b1ca5e403277"). InnerVolumeSpecName "kube-api-access-p68mp". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.371008 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08fdac8c-3df5-49ff-a6da-b1ca5e403277-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.371273 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p68mp\" (UniqueName: \"kubernetes.io/projected/08fdac8c-3df5-49ff-a6da-b1ca5e403277-kube-api-access-p68mp\") on node \"crc\" DevicePath \"\""
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.949101 4865 generic.go:334] "Generic (PLEG): container finished" podID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerID="da0cb86d4a10e2a10491a3d9fa328eb2da58fa88b5858471a84e571a9a4efc8f" exitCode=0
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.949176 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" event={"ID":"08401bb5-3c30-45b4-bbf1-f963080642ce","Type":"ContainerDied","Data":"da0cb86d4a10e2a10491a3d9fa328eb2da58fa88b5858471a84e571a9a4efc8f"}
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.949201 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" event={"ID":"08401bb5-3c30-45b4-bbf1-f963080642ce","Type":"ContainerStarted","Data":"80f07919a089b6d1532990c2784fa5f68b04f38001ab7474d71a4a323f79586e"}
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.953290 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zfnlj" event={"ID":"08fdac8c-3df5-49ff-a6da-b1ca5e403277","Type":"ContainerDied","Data":"76e61630ebc6db06719fbf93e58f955e03ef7a514742d57ded607f7b55d4f93b"}
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.953322 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76e61630ebc6db06719fbf93e58f955e03ef7a514742d57ded607f7b55d4f93b"
Jan 03 04:34:11 crc kubenswrapper[4865]: I0103 04:34:11.953364 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zfnlj"
Jan 03 04:34:12 crc kubenswrapper[4865]: I0103 04:34:12.963253 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" event={"ID":"08401bb5-3c30-45b4-bbf1-f963080642ce","Type":"ContainerStarted","Data":"38e6418945aded7a94fafbf95afb6897f5fe03fb61dce7cfa754b469b53ea65b"}
Jan 03 04:34:12 crc kubenswrapper[4865]: I0103 04:34:12.963611 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5"
Jan 03 04:34:12 crc kubenswrapper[4865]: I0103 04:34:12.984262 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" podStartSLOduration=2.984239862 podStartE2EDuration="2.984239862s" podCreationTimestamp="2026-01-03 04:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:12.977696886 +0000 UTC m=+1080.094750061" watchObservedRunningTime="2026-01-03 04:34:12.984239862 +0000 UTC m=+1080.101293067"
Jan 03 04:34:13 crc kubenswrapper[4865]: I0103 04:34:13.348980 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 03 04:34:13 crc kubenswrapper[4865]: I0103 04:34:13.811667 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zfnlj"]
Jan 03 04:34:13 crc kubenswrapper[4865]: I0103 04:34:13.816937 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zfnlj"]
Jan 03 04:34:15 crc kubenswrapper[4865]: I0103 04:34:15.167889 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fdac8c-3df5-49ff-a6da-b1ca5e403277" path="/var/lib/kubelet/pods/08fdac8c-3df5-49ff-a6da-b1ca5e403277/volumes"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.091668 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gpwwp" podUID="334ea42d-9265-43f9-8c4c-fdf516746069" containerName="ovn-controller" probeResult="failure" output=<
Jan 03 04:34:16 crc kubenswrapper[4865]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 03 04:34:16 crc kubenswrapper[4865]: >
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.133973 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5hbf4"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.138357 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5hbf4"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.382487 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gpwwp-config-nvfvn"]
Jan 03 04:34:16 crc kubenswrapper[4865]: E0103 04:34:16.383193 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fdac8c-3df5-49ff-a6da-b1ca5e403277" containerName="mariadb-account-create-update"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.383210 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fdac8c-3df5-49ff-a6da-b1ca5e403277" containerName="mariadb-account-create-update"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.383422 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fdac8c-3df5-49ff-a6da-b1ca5e403277" containerName="mariadb-account-create-update"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.383976 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.386423 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.394601 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gpwwp-config-nvfvn"]
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.461748 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-log-ovn\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.461916 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run-ovn\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.461949 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.461976 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv4jf\" (UniqueName: \"kubernetes.io/projected/9c151253-1fe5-4e64-b57f-9b8054e5b654-kube-api-access-rv4jf\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.462068 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-scripts\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.462101 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-additional-scripts\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.563482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-scripts\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.563532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-additional-scripts\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.563567 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-log-ovn\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.563632 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run-ovn\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.563655 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.563669 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv4jf\" (UniqueName: \"kubernetes.io/projected/9c151253-1fe5-4e64-b57f-9b8054e5b654-kube-api-access-rv4jf\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.566396 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-additional-scripts\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.566460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-log-ovn\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.566497 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run-ovn\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.566557 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.567364 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-scripts\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.581822 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv4jf\" (UniqueName: \"kubernetes.io/projected/9c151253-1fe5-4e64-b57f-9b8054e5b654-kube-api-access-rv4jf\") pod \"ovn-controller-gpwwp-config-nvfvn\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:16 crc kubenswrapper[4865]: I0103 04:34:16.709840 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gpwwp-config-nvfvn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.139537 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.423513 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rft58"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.427815 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.427879 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.431583 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rft58"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.549244 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-wkxxn"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.550277 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.563333 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-71cb-account-create-update-68wkg"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.564357 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.566056 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.580369 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wkxxn"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.580661 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtq7q\" (UniqueName: \"kubernetes.io/projected/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-kube-api-access-xtq7q\") pod \"cinder-db-create-rft58\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.580850 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-operator-scripts\") pod \"cinder-db-create-rft58\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.599518 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-71cb-account-create-update-68wkg"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.644166 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3250-account-create-update-vwhqs"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.645725 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.648642 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.657958 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3250-account-create-update-vwhqs"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.681942 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtq7q\" (UniqueName: \"kubernetes.io/projected/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-kube-api-access-xtq7q\") pod \"cinder-db-create-rft58\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.682024 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwttr\" (UniqueName: \"kubernetes.io/projected/38befb89-f7ad-4e15-bfa9-d54cb0595e97-kube-api-access-bwttr\") pod \"barbican-71cb-account-create-update-68wkg\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.682056 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38befb89-f7ad-4e15-bfa9-d54cb0595e97-operator-scripts\") pod \"barbican-71cb-account-create-update-68wkg\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.682109 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-operator-scripts\") pod \"barbican-db-create-wkxxn\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.682125 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jm6\" (UniqueName: \"kubernetes.io/projected/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-kube-api-access-k5jm6\") pod \"barbican-db-create-wkxxn\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.682146 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-operator-scripts\") pod \"cinder-db-create-rft58\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.682806 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-operator-scripts\") pod \"cinder-db-create-rft58\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.707372 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtq7q\" (UniqueName: \"kubernetes.io/projected/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-kube-api-access-xtq7q\") pod \"cinder-db-create-rft58\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.784464 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2mf\" (UniqueName: \"kubernetes.io/projected/db440cf4-6919-4524-a661-1ce3b3f009b0-kube-api-access-fb2mf\") pod \"cinder-3250-account-create-update-vwhqs\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.784564 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwttr\" (UniqueName: \"kubernetes.io/projected/38befb89-f7ad-4e15-bfa9-d54cb0595e97-kube-api-access-bwttr\") pod \"barbican-71cb-account-create-update-68wkg\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.784621 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38befb89-f7ad-4e15-bfa9-d54cb0595e97-operator-scripts\") pod \"barbican-71cb-account-create-update-68wkg\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.784678 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-operator-scripts\") pod \"barbican-db-create-wkxxn\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.784693 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5jm6\" (UniqueName: \"kubernetes.io/projected/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-kube-api-access-k5jm6\") pod \"barbican-db-create-wkxxn\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.784710 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db440cf4-6919-4524-a661-1ce3b3f009b0-operator-scripts\") pod \"cinder-3250-account-create-update-vwhqs\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.785512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38befb89-f7ad-4e15-bfa9-d54cb0595e97-operator-scripts\") pod \"barbican-71cb-account-create-update-68wkg\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.787129 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-operator-scripts\") pod \"barbican-db-create-wkxxn\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.793853 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rft58"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.818726 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5jm6\" (UniqueName: \"kubernetes.io/projected/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-kube-api-access-k5jm6\") pod \"barbican-db-create-wkxxn\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.822146 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwttr\" (UniqueName: \"kubernetes.io/projected/38befb89-f7ad-4e15-bfa9-d54cb0595e97-kube-api-access-bwttr\") pod \"barbican-71cb-account-create-update-68wkg\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.830994 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ngq5m"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.832322 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.838002 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ngq5m"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.870245 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wkxxn"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.878684 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-lcnz6"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.879551 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.880742 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-71cb-account-create-update-68wkg"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.884505 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.885568 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.885710 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.886450 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db440cf4-6919-4524-a661-1ce3b3f009b0-operator-scripts\") pod \"cinder-3250-account-create-update-vwhqs\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.886535 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2mf\" (UniqueName: \"kubernetes.io/projected/db440cf4-6919-4524-a661-1ce3b3f009b0-kube-api-access-fb2mf\") pod \"cinder-3250-account-create-update-vwhqs\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.887315 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db440cf4-6919-4524-a661-1ce3b3f009b0-operator-scripts\") pod \"cinder-3250-account-create-update-vwhqs\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.888919 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-82kzt"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.900459 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lcnz6"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.905847 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2mf\" (UniqueName: \"kubernetes.io/projected/db440cf4-6919-4524-a661-1ce3b3f009b0-kube-api-access-fb2mf\") pod \"cinder-3250-account-create-update-vwhqs\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.943295 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3a49-account-create-update-m8d4l"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.944569 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.949912 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.960225 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a49-account-create-update-m8d4l"]
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.963091 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3250-account-create-update-vwhqs"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.989259 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-kube-api-access-cq9zx\") pod \"neutron-db-create-ngq5m\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.989322 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xswg4\" (UniqueName: \"kubernetes.io/projected/0acdff76-f952-437b-a36d-d1469377e304-kube-api-access-xswg4\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.989362 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-config-data\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.989500 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-combined-ca-bundle\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:17 crc kubenswrapper[4865]: I0103 04:34:17.989605 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-operator-scripts\") pod \"neutron-db-create-ngq5m\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.090794 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-combined-ca-bundle\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.090864 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-operator-scripts\") pod \"neutron-db-create-ngq5m\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.090913 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnnz\" (UniqueName: \"kubernetes.io/projected/56ba6a0b-8b85-4722-8be7-1d059c17f147-kube-api-access-7dnnz\") pod \"neutron-3a49-account-create-update-m8d4l\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.090981 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-kube-api-access-cq9zx\") pod \"neutron-db-create-ngq5m\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.091010 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswg4\" (UniqueName: \"kubernetes.io/projected/0acdff76-f952-437b-a36d-d1469377e304-kube-api-access-xswg4\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.091031 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-config-data\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.091050 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ba6a0b-8b85-4722-8be7-1d059c17f147-operator-scripts\") pod \"neutron-3a49-account-create-update-m8d4l\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.091885 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-operator-scripts\") pod \"neutron-db-create-ngq5m\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.097056 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-combined-ca-bundle\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.097102 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-config-data\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.109374 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswg4\" (UniqueName: \"kubernetes.io/projected/0acdff76-f952-437b-a36d-d1469377e304-kube-api-access-xswg4\") pod \"keystone-db-sync-lcnz6\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.110191 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-kube-api-access-cq9zx\") pod \"neutron-db-create-ngq5m\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.189975 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ngq5m"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.192744 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnnz\" (UniqueName: \"kubernetes.io/projected/56ba6a0b-8b85-4722-8be7-1d059c17f147-kube-api-access-7dnnz\") pod \"neutron-3a49-account-create-update-m8d4l\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.192843 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ba6a0b-8b85-4722-8be7-1d059c17f147-operator-scripts\") pod \"neutron-3a49-account-create-update-m8d4l\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.208005 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ba6a0b-8b85-4722-8be7-1d059c17f147-operator-scripts\") pod \"neutron-3a49-account-create-update-m8d4l\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.211190 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lcnz6"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.218404 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnnz\" (UniqueName: \"kubernetes.io/projected/56ba6a0b-8b85-4722-8be7-1d059c17f147-kube-api-access-7dnnz\") pod \"neutron-3a49-account-create-update-m8d4l\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.266262 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a49-account-create-update-m8d4l"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.805325 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xgsl9"]
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.812805 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xgsl9"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.814850 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.815484 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xgsl9"]
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.911965 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddwwm\" (UniqueName: \"kubernetes.io/projected/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-kube-api-access-ddwwm\") pod \"root-account-create-update-xgsl9\" (UID: \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " pod="openstack/root-account-create-update-xgsl9"
Jan 03 04:34:18 crc kubenswrapper[4865]: I0103 04:34:18.912075 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-operator-scripts\") pod \"root-account-create-update-xgsl9\" (UID: \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " pod="openstack/root-account-create-update-xgsl9"
Jan 03 04:34:19 crc kubenswrapper[4865]: I0103 04:34:19.014189 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-operator-scripts\") pod \"root-account-create-update-xgsl9\" (UID: \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " pod="openstack/root-account-create-update-xgsl9"
Jan 03 04:34:19 crc kubenswrapper[4865]: I0103 04:34:19.014325 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddwwm\" (UniqueName: \"kubernetes.io/projected/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-kube-api-access-ddwwm\") pod \"root-account-create-update-xgsl9\" (UID:
\"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " pod="openstack/root-account-create-update-xgsl9" Jan 03 04:34:19 crc kubenswrapper[4865]: I0103 04:34:19.015362 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-operator-scripts\") pod \"root-account-create-update-xgsl9\" (UID: \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " pod="openstack/root-account-create-update-xgsl9" Jan 03 04:34:19 crc kubenswrapper[4865]: I0103 04:34:19.049117 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddwwm\" (UniqueName: \"kubernetes.io/projected/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-kube-api-access-ddwwm\") pod \"root-account-create-update-xgsl9\" (UID: \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " pod="openstack/root-account-create-update-xgsl9" Jan 03 04:34:19 crc kubenswrapper[4865]: I0103 04:34:19.135677 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xgsl9" Jan 03 04:34:20 crc kubenswrapper[4865]: I0103 04:34:20.659558 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:34:20 crc kubenswrapper[4865]: I0103 04:34:20.750344 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mfjm9"] Jan 03 04:34:20 crc kubenswrapper[4865]: I0103 04:34:20.750628 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-mfjm9" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerName="dnsmasq-dns" containerID="cri-o://bcdec8602bbecd70c089f837370a7125b450ccf486e97186c06ca218c863619d" gracePeriod=10 Jan 03 04:34:21 crc kubenswrapper[4865]: I0103 04:34:21.100218 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gpwwp" podUID="334ea42d-9265-43f9-8c4c-fdf516746069" containerName="ovn-controller" probeResult="failure" output=< Jan 03 04:34:21 crc kubenswrapper[4865]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 03 04:34:21 crc kubenswrapper[4865]: > Jan 03 04:34:23 crc kubenswrapper[4865]: I0103 04:34:23.300830 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mfjm9" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 03 04:34:26 crc kubenswrapper[4865]: I0103 04:34:26.081622 4865 generic.go:334] "Generic (PLEG): container finished" podID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerID="bcdec8602bbecd70c089f837370a7125b450ccf486e97186c06ca218c863619d" exitCode=0 Jan 03 04:34:26 crc kubenswrapper[4865]: I0103 04:34:26.081731 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mfjm9" 
event={"ID":"50d764b4-196f-468f-b4e4-8a3f9f6f206c","Type":"ContainerDied","Data":"bcdec8602bbecd70c089f837370a7125b450ccf486e97186c06ca218c863619d"} Jan 03 04:34:26 crc kubenswrapper[4865]: I0103 04:34:26.107794 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gpwwp" podUID="334ea42d-9265-43f9-8c4c-fdf516746069" containerName="ovn-controller" probeResult="failure" output=< Jan 03 04:34:26 crc kubenswrapper[4865]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 03 04:34:26 crc kubenswrapper[4865]: > Jan 03 04:34:27 crc kubenswrapper[4865]: E0103 04:34:27.690432 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 03 04:34:27 crc kubenswrapper[4865]: E0103 04:34:27.691009 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcjtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gdpqx_openstack(89004a40-1d1d-46ec-a342-a067fb1eaa54): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 03 04:34:27 crc kubenswrapper[4865]: E0103 04:34:27.692824 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-gdpqx" podUID="89004a40-1d1d-46ec-a342-a067fb1eaa54" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.012805 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.083238 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-nb\") pod \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.083277 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-sb\") pod \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.083304 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-dns-svc\") pod \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.083323 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-config\") pod \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " Jan 03 04:34:28 
crc kubenswrapper[4865]: I0103 04:34:28.083369 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg7b7\" (UniqueName: \"kubernetes.io/projected/50d764b4-196f-468f-b4e4-8a3f9f6f206c-kube-api-access-rg7b7\") pod \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\" (UID: \"50d764b4-196f-468f-b4e4-8a3f9f6f206c\") " Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.088733 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d764b4-196f-468f-b4e4-8a3f9f6f206c-kube-api-access-rg7b7" (OuterVolumeSpecName: "kube-api-access-rg7b7") pod "50d764b4-196f-468f-b4e4-8a3f9f6f206c" (UID: "50d764b4-196f-468f-b4e4-8a3f9f6f206c"). InnerVolumeSpecName "kube-api-access-rg7b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.109139 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mfjm9" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.109059 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mfjm9" event={"ID":"50d764b4-196f-468f-b4e4-8a3f9f6f206c","Type":"ContainerDied","Data":"7b3f74294e77f3bf3abf1131955f529fdb3351cd414f8efc83730747ee857e04"} Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.109557 4865 scope.go:117] "RemoveContainer" containerID="bcdec8602bbecd70c089f837370a7125b450ccf486e97186c06ca218c863619d" Jan 03 04:34:28 crc kubenswrapper[4865]: E0103 04:34:28.109599 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-gdpqx" podUID="89004a40-1d1d-46ec-a342-a067fb1eaa54" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.125249 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50d764b4-196f-468f-b4e4-8a3f9f6f206c" (UID: "50d764b4-196f-468f-b4e4-8a3f9f6f206c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.129288 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-config" (OuterVolumeSpecName: "config") pod "50d764b4-196f-468f-b4e4-8a3f9f6f206c" (UID: "50d764b4-196f-468f-b4e4-8a3f9f6f206c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.142207 4865 scope.go:117] "RemoveContainer" containerID="c96e93aaf2a4c21e5bf43c358e0ce7801f77c852d3edc7050da4c0ab7e291b76" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.151138 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50d764b4-196f-468f-b4e4-8a3f9f6f206c" (UID: "50d764b4-196f-468f-b4e4-8a3f9f6f206c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.158695 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50d764b4-196f-468f-b4e4-8a3f9f6f206c" (UID: "50d764b4-196f-468f-b4e4-8a3f9f6f206c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.185107 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.185124 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg7b7\" (UniqueName: \"kubernetes.io/projected/50d764b4-196f-468f-b4e4-8a3f9f6f206c-kube-api-access-rg7b7\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.185133 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.185141 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.185149 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d764b4-196f-468f-b4e4-8a3f9f6f206c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.238349 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lcnz6"] Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.370282 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gpwwp-config-nvfvn"] Jan 03 04:34:28 crc kubenswrapper[4865]: W0103 04:34:28.371726 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c151253_1fe5_4e64_b57f_9b8054e5b654.slice/crio-09db94fb2392479a2577689d188866c7725a7e26bf7dc285b56ade36e2d99aef WatchSource:0}: Error finding container 09db94fb2392479a2577689d188866c7725a7e26bf7dc285b56ade36e2d99aef: Status 404 returned error can't find the container with id 09db94fb2392479a2577689d188866c7725a7e26bf7dc285b56ade36e2d99aef Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.463480 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mfjm9"] Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.473736 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mfjm9"] Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.481332 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rft58"] Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.496811 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3a49-account-create-update-m8d4l"] Jan 03 04:34:28 crc kubenswrapper[4865]: W0103 04:34:28.498423 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb717539_b1e5_4a67_97d4_2632d4e7fd7e.slice/crio-5ec09d215d472861e27f67242670f3e7dbcaf510580258b60726c17b2c879973 WatchSource:0}: Error finding container 5ec09d215d472861e27f67242670f3e7dbcaf510580258b60726c17b2c879973: Status 404 returned error can't find the container with id 5ec09d215d472861e27f67242670f3e7dbcaf510580258b60726c17b2c879973 Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.508950 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3250-account-create-update-vwhqs"] Jan 03 04:34:28 crc kubenswrapper[4865]: W0103 04:34:28.513057 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb440cf4_6919_4524_a661_1ce3b3f009b0.slice/crio-32c0abada91fc52c3760d66b23cf389eddee3a623a758d11728a67512f9cca5f WatchSource:0}: Error finding container 32c0abada91fc52c3760d66b23cf389eddee3a623a758d11728a67512f9cca5f: Status 404 returned error can't find the container with id 32c0abada91fc52c3760d66b23cf389eddee3a623a758d11728a67512f9cca5f Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.514276 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ngq5m"] Jan 03 04:34:28 crc kubenswrapper[4865]: W0103 04:34:28.516315 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20f96647_4c0b_483e_b0aa_a2aa6069ef3c.slice/crio-3ddad3ebc12e164dfea446d2914539fc440dae267d3b191020b263a8aab7e800 WatchSource:0}: Error finding container 3ddad3ebc12e164dfea446d2914539fc440dae267d3b191020b263a8aab7e800: Status 404 returned error can't find the container with id 3ddad3ebc12e164dfea446d2914539fc440dae267d3b191020b263a8aab7e800 Jan 03 04:34:28 crc kubenswrapper[4865]: W0103 04:34:28.518544 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc68fb97f_4d5d_4258_bd60_882e5aa61ba7.slice/crio-0cc3637985c9de77739da3f7ac1678794b7ab04a79eb30aeba84c8f309034b7c WatchSource:0}: Error finding container 0cc3637985c9de77739da3f7ac1678794b7ab04a79eb30aeba84c8f309034b7c: Status 404 returned error can't find the container with id 0cc3637985c9de77739da3f7ac1678794b7ab04a79eb30aeba84c8f309034b7c Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.519333 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wkxxn"] Jan 03 04:34:28 crc kubenswrapper[4865]: W0103 04:34:28.519877 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e1d21f6_4e5a_407d_be7d_ca8d2fcdf24a.slice/crio-e90de2d09af7b0e57ba1fc68ab668882020a2762b488488390609a9fcfc614ad WatchSource:0}: Error finding container e90de2d09af7b0e57ba1fc68ab668882020a2762b488488390609a9fcfc614ad: Status 404 returned error can't find the container with id e90de2d09af7b0e57ba1fc68ab668882020a2762b488488390609a9fcfc614ad Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.532100 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xgsl9"] Jan 03 04:34:28 crc kubenswrapper[4865]: I0103 04:34:28.547632 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-71cb-account-create-update-68wkg"] Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.117828 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-71cb-account-create-update-68wkg" event={"ID":"38befb89-f7ad-4e15-bfa9-d54cb0595e97","Type":"ContainerStarted","Data":"c0fdf93a2f8e30903b52595c780ccd9edd80cbdc5b0eceedb849b0ba9250799a"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.118151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-71cb-account-create-update-68wkg" event={"ID":"38befb89-f7ad-4e15-bfa9-d54cb0595e97","Type":"ContainerStarted","Data":"51d849bafe967c5e1d30ae8e19db2e7380c720964046474d5141853f603460b1"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.121718 4865 generic.go:334] "Generic (PLEG): container finished" podID="fb717539-b1e5-4a67-97d4-2632d4e7fd7e" containerID="9fc856c31e68579a136c1c8dfb4dce21c98431c2b57815f797e5663e4b35c1eb" exitCode=0 Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.121826 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rft58" event={"ID":"fb717539-b1e5-4a67-97d4-2632d4e7fd7e","Type":"ContainerDied","Data":"9fc856c31e68579a136c1c8dfb4dce21c98431c2b57815f797e5663e4b35c1eb"} Jan 03 04:34:29 crc 
kubenswrapper[4865]: I0103 04:34:29.121854 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rft58" event={"ID":"fb717539-b1e5-4a67-97d4-2632d4e7fd7e","Type":"ContainerStarted","Data":"5ec09d215d472861e27f67242670f3e7dbcaf510580258b60726c17b2c879973"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.123491 4865 generic.go:334] "Generic (PLEG): container finished" podID="c68fb97f-4d5d-4258-bd60-882e5aa61ba7" containerID="828a08042a0b43b49d7f5c253155a3e741f7700266797d021f2d501dfdcc8625" exitCode=0 Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.123552 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ngq5m" event={"ID":"c68fb97f-4d5d-4258-bd60-882e5aa61ba7","Type":"ContainerDied","Data":"828a08042a0b43b49d7f5c253155a3e741f7700266797d021f2d501dfdcc8625"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.123575 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ngq5m" event={"ID":"c68fb97f-4d5d-4258-bd60-882e5aa61ba7","Type":"ContainerStarted","Data":"0cc3637985c9de77739da3f7ac1678794b7ab04a79eb30aeba84c8f309034b7c"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.125233 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lcnz6" event={"ID":"0acdff76-f952-437b-a36d-d1469377e304","Type":"ContainerStarted","Data":"cd8d0ce9d3e98b86df583ee25059f7afcbdf10c04a250a1cc87898bd6a4d89aa"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.127213 4865 generic.go:334] "Generic (PLEG): container finished" podID="db440cf4-6919-4524-a661-1ce3b3f009b0" containerID="e691228bd9c8a7b0088726cba7bb4717e82caa24c0cccd9750d6da634a082b38" exitCode=0 Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.127298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3250-account-create-update-vwhqs" 
event={"ID":"db440cf4-6919-4524-a661-1ce3b3f009b0","Type":"ContainerDied","Data":"e691228bd9c8a7b0088726cba7bb4717e82caa24c0cccd9750d6da634a082b38"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.127372 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3250-account-create-update-vwhqs" event={"ID":"db440cf4-6919-4524-a661-1ce3b3f009b0","Type":"ContainerStarted","Data":"32c0abada91fc52c3760d66b23cf389eddee3a623a758d11728a67512f9cca5f"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.133358 4865 generic.go:334] "Generic (PLEG): container finished" podID="56ba6a0b-8b85-4722-8be7-1d059c17f147" containerID="2a1f8f0b04457a081dc0b02ec2ed4189dc8ccab348be17523df59c84674e75cb" exitCode=0 Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.133443 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a49-account-create-update-m8d4l" event={"ID":"56ba6a0b-8b85-4722-8be7-1d059c17f147","Type":"ContainerDied","Data":"2a1f8f0b04457a081dc0b02ec2ed4189dc8ccab348be17523df59c84674e75cb"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.133465 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a49-account-create-update-m8d4l" event={"ID":"56ba6a0b-8b85-4722-8be7-1d059c17f147","Type":"ContainerStarted","Data":"d3c559d504c17098ba0c64a89f8e5ef903528213c03cb69bf8b350af5fcbb957"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.136203 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wkxxn" event={"ID":"20f96647-4c0b-483e-b0aa-a2aa6069ef3c","Type":"ContainerStarted","Data":"0c70a0b9453adcdc529eda92ad38e7eb23c3f27fe685338ce5cb6f9424521a78"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.136229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wkxxn" 
event={"ID":"20f96647-4c0b-483e-b0aa-a2aa6069ef3c","Type":"ContainerStarted","Data":"3ddad3ebc12e164dfea446d2914539fc440dae267d3b191020b263a8aab7e800"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.136572 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-71cb-account-create-update-68wkg" podStartSLOduration=12.136555209 podStartE2EDuration="12.136555209s" podCreationTimestamp="2026-01-03 04:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:29.12884228 +0000 UTC m=+1096.245895465" watchObservedRunningTime="2026-01-03 04:34:29.136555209 +0000 UTC m=+1096.253608404" Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.137485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xgsl9" event={"ID":"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a","Type":"ContainerStarted","Data":"83d68e3c4117fd1e543595bf10a312626dab4f8d661f56373d489df5c28d0e31"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.137511 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xgsl9" event={"ID":"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a","Type":"ContainerStarted","Data":"e90de2d09af7b0e57ba1fc68ab668882020a2762b488488390609a9fcfc614ad"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.140653 4865 generic.go:334] "Generic (PLEG): container finished" podID="9c151253-1fe5-4e64-b57f-9b8054e5b654" containerID="09cdd10ee90e48a30782504c4df7c1f1f428bda24ab1e22bb6836c4ab23b8507" exitCode=0 Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.140697 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gpwwp-config-nvfvn" event={"ID":"9c151253-1fe5-4e64-b57f-9b8054e5b654","Type":"ContainerDied","Data":"09cdd10ee90e48a30782504c4df7c1f1f428bda24ab1e22bb6836c4ab23b8507"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 
04:34:29.140725 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gpwwp-config-nvfvn" event={"ID":"9c151253-1fe5-4e64-b57f-9b8054e5b654","Type":"ContainerStarted","Data":"09db94fb2392479a2577689d188866c7725a7e26bf7dc285b56ade36e2d99aef"} Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.169899 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" path="/var/lib/kubelet/pods/50d764b4-196f-468f-b4e4-8a3f9f6f206c/volumes" Jan 03 04:34:29 crc kubenswrapper[4865]: I0103 04:34:29.212059 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xgsl9" podStartSLOduration=11.212043353 podStartE2EDuration="11.212043353s" podCreationTimestamp="2026-01-03 04:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:29.205103936 +0000 UTC m=+1096.322157131" watchObservedRunningTime="2026-01-03 04:34:29.212043353 +0000 UTC m=+1096.329096538" Jan 03 04:34:29 crc kubenswrapper[4865]: E0103 04:34:29.406981 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e1d21f6_4e5a_407d_be7d_ca8d2fcdf24a.slice/crio-conmon-83d68e3c4117fd1e543595bf10a312626dab4f8d661f56373d489df5c28d0e31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb440cf4_6919_4524_a661_1ce3b3f009b0.slice/crio-conmon-e691228bd9c8a7b0088726cba7bb4717e82caa24c0cccd9750d6da634a082b38.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e1d21f6_4e5a_407d_be7d_ca8d2fcdf24a.slice/crio-83d68e3c4117fd1e543595bf10a312626dab4f8d661f56373d489df5c28d0e31.scope\": RecentStats: unable to find data in 
memory cache]" Jan 03 04:34:30 crc kubenswrapper[4865]: I0103 04:34:30.154470 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-71cb-account-create-update-68wkg" event={"ID":"38befb89-f7ad-4e15-bfa9-d54cb0595e97","Type":"ContainerDied","Data":"c0fdf93a2f8e30903b52595c780ccd9edd80cbdc5b0eceedb849b0ba9250799a"} Jan 03 04:34:30 crc kubenswrapper[4865]: I0103 04:34:30.154366 4865 generic.go:334] "Generic (PLEG): container finished" podID="38befb89-f7ad-4e15-bfa9-d54cb0595e97" containerID="c0fdf93a2f8e30903b52595c780ccd9edd80cbdc5b0eceedb849b0ba9250799a" exitCode=0 Jan 03 04:34:30 crc kubenswrapper[4865]: I0103 04:34:30.157419 4865 generic.go:334] "Generic (PLEG): container finished" podID="20f96647-4c0b-483e-b0aa-a2aa6069ef3c" containerID="0c70a0b9453adcdc529eda92ad38e7eb23c3f27fe685338ce5cb6f9424521a78" exitCode=0 Jan 03 04:34:30 crc kubenswrapper[4865]: I0103 04:34:30.157504 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wkxxn" event={"ID":"20f96647-4c0b-483e-b0aa-a2aa6069ef3c","Type":"ContainerDied","Data":"0c70a0b9453adcdc529eda92ad38e7eb23c3f27fe685338ce5cb6f9424521a78"} Jan 03 04:34:30 crc kubenswrapper[4865]: I0103 04:34:30.158878 4865 generic.go:334] "Generic (PLEG): container finished" podID="9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a" containerID="83d68e3c4117fd1e543595bf10a312626dab4f8d661f56373d489df5c28d0e31" exitCode=0 Jan 03 04:34:30 crc kubenswrapper[4865]: I0103 04:34:30.158968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xgsl9" event={"ID":"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a","Type":"ContainerDied","Data":"83d68e3c4117fd1e543595bf10a312626dab4f8d661f56373d489df5c28d0e31"} Jan 03 04:34:31 crc kubenswrapper[4865]: I0103 04:34:31.095151 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gpwwp" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.073684 4865 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xgsl9" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.083012 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3250-account-create-update-vwhqs" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.095836 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ngq5m" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.121646 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-71cb-account-create-update-68wkg" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.123703 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rft58" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.139625 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a49-account-create-update-m8d4l" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.193351 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ngq5m" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.197581 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3250-account-create-update-vwhqs" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198018 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db440cf4-6919-4524-a661-1ce3b3f009b0-operator-scripts\") pod \"db440cf4-6919-4524-a661-1ce3b3f009b0\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198163 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwttr\" (UniqueName: \"kubernetes.io/projected/38befb89-f7ad-4e15-bfa9-d54cb0595e97-kube-api-access-bwttr\") pod \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198196 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-operator-scripts\") pod \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198226 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-kube-api-access-cq9zx\") pod \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198313 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddwwm\" (UniqueName: \"kubernetes.io/projected/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-kube-api-access-ddwwm\") pod \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\" (UID: \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198355 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fb2mf\" (UniqueName: \"kubernetes.io/projected/db440cf4-6919-4524-a661-1ce3b3f009b0-kube-api-access-fb2mf\") pod \"db440cf4-6919-4524-a661-1ce3b3f009b0\" (UID: \"db440cf4-6919-4524-a661-1ce3b3f009b0\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198374 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-operator-scripts\") pod \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\" (UID: \"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198424 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-operator-scripts\") pod \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\" (UID: \"c68fb97f-4d5d-4258-bd60-882e5aa61ba7\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198443 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtq7q\" (UniqueName: \"kubernetes.io/projected/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-kube-api-access-xtq7q\") pod \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\" (UID: \"fb717539-b1e5-4a67-97d4-2632d4e7fd7e\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198460 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38befb89-f7ad-4e15-bfa9-d54cb0595e97-operator-scripts\") pod \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\" (UID: \"38befb89-f7ad-4e15-bfa9-d54cb0595e97\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198612 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db440cf4-6919-4524-a661-1ce3b3f009b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"db440cf4-6919-4524-a661-1ce3b3f009b0" (UID: "db440cf4-6919-4524-a661-1ce3b3f009b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198965 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38befb89-f7ad-4e15-bfa9-d54cb0595e97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38befb89-f7ad-4e15-bfa9-d54cb0595e97" (UID: "38befb89-f7ad-4e15-bfa9-d54cb0595e97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.198975 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db440cf4-6919-4524-a661-1ce3b3f009b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.199350 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a" (UID: "9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.199683 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c68fb97f-4d5d-4258-bd60-882e5aa61ba7" (UID: "c68fb97f-4d5d-4258-bd60-882e5aa61ba7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.200092 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb717539-b1e5-4a67-97d4-2632d4e7fd7e" (UID: "fb717539-b1e5-4a67-97d4-2632d4e7fd7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.200306 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3a49-account-create-update-m8d4l" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.201809 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-71cb-account-create-update-68wkg" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.203032 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-kube-api-access-cq9zx" (OuterVolumeSpecName: "kube-api-access-cq9zx") pod "c68fb97f-4d5d-4258-bd60-882e5aa61ba7" (UID: "c68fb97f-4d5d-4258-bd60-882e5aa61ba7"). InnerVolumeSpecName "kube-api-access-cq9zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.203089 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rft58" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.203117 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38befb89-f7ad-4e15-bfa9-d54cb0595e97-kube-api-access-bwttr" (OuterVolumeSpecName: "kube-api-access-bwttr") pod "38befb89-f7ad-4e15-bfa9-d54cb0595e97" (UID: "38befb89-f7ad-4e15-bfa9-d54cb0595e97"). InnerVolumeSpecName "kube-api-access-bwttr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.203415 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db440cf4-6919-4524-a661-1ce3b3f009b0-kube-api-access-fb2mf" (OuterVolumeSpecName: "kube-api-access-fb2mf") pod "db440cf4-6919-4524-a661-1ce3b3f009b0" (UID: "db440cf4-6919-4524-a661-1ce3b3f009b0"). InnerVolumeSpecName "kube-api-access-fb2mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.206608 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xgsl9" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.206891 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-kube-api-access-ddwwm" (OuterVolumeSpecName: "kube-api-access-ddwwm") pod "9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a" (UID: "9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a"). InnerVolumeSpecName "kube-api-access-ddwwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.228750 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-kube-api-access-xtq7q" (OuterVolumeSpecName: "kube-api-access-xtq7q") pod "fb717539-b1e5-4a67-97d4-2632d4e7fd7e" (UID: "fb717539-b1e5-4a67-97d4-2632d4e7fd7e"). InnerVolumeSpecName "kube-api-access-xtq7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.261255 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-wkxxn" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.269797 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ngq5m" event={"ID":"c68fb97f-4d5d-4258-bd60-882e5aa61ba7","Type":"ContainerDied","Data":"0cc3637985c9de77739da3f7ac1678794b7ab04a79eb30aeba84c8f309034b7c"} Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.269871 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cc3637985c9de77739da3f7ac1678794b7ab04a79eb30aeba84c8f309034b7c" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.269913 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gpwwp-config-nvfvn" event={"ID":"9c151253-1fe5-4e64-b57f-9b8054e5b654","Type":"ContainerDied","Data":"09db94fb2392479a2577689d188866c7725a7e26bf7dc285b56ade36e2d99aef"} Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.269948 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09db94fb2392479a2577689d188866c7725a7e26bf7dc285b56ade36e2d99aef" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.269962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3250-account-create-update-vwhqs" event={"ID":"db440cf4-6919-4524-a661-1ce3b3f009b0","Type":"ContainerDied","Data":"32c0abada91fc52c3760d66b23cf389eddee3a623a758d11728a67512f9cca5f"} Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270001 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c0abada91fc52c3760d66b23cf389eddee3a623a758d11728a67512f9cca5f" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3a49-account-create-update-m8d4l" event={"ID":"56ba6a0b-8b85-4722-8be7-1d059c17f147","Type":"ContainerDied","Data":"d3c559d504c17098ba0c64a89f8e5ef903528213c03cb69bf8b350af5fcbb957"} Jan 03 
04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270028 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c559d504c17098ba0c64a89f8e5ef903528213c03cb69bf8b350af5fcbb957" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270040 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-71cb-account-create-update-68wkg" event={"ID":"38befb89-f7ad-4e15-bfa9-d54cb0595e97","Type":"ContainerDied","Data":"51d849bafe967c5e1d30ae8e19db2e7380c720964046474d5141853f603460b1"} Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270074 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d849bafe967c5e1d30ae8e19db2e7380c720964046474d5141853f603460b1" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270088 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rft58" event={"ID":"fb717539-b1e5-4a67-97d4-2632d4e7fd7e","Type":"ContainerDied","Data":"5ec09d215d472861e27f67242670f3e7dbcaf510580258b60726c17b2c879973"} Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270104 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec09d215d472861e27f67242670f3e7dbcaf510580258b60726c17b2c879973" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270114 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wkxxn" event={"ID":"20f96647-4c0b-483e-b0aa-a2aa6069ef3c","Type":"ContainerDied","Data":"3ddad3ebc12e164dfea446d2914539fc440dae267d3b191020b263a8aab7e800"} Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270158 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ddad3ebc12e164dfea446d2914539fc440dae267d3b191020b263a8aab7e800" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270170 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xgsl9" 
event={"ID":"9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a","Type":"ContainerDied","Data":"e90de2d09af7b0e57ba1fc68ab668882020a2762b488488390609a9fcfc614ad"} Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.270183 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90de2d09af7b0e57ba1fc68ab668882020a2762b488488390609a9fcfc614ad" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.300460 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dnnz\" (UniqueName: \"kubernetes.io/projected/56ba6a0b-8b85-4722-8be7-1d059c17f147-kube-api-access-7dnnz\") pod \"56ba6a0b-8b85-4722-8be7-1d059c17f147\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.300748 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ba6a0b-8b85-4722-8be7-1d059c17f147-operator-scripts\") pod \"56ba6a0b-8b85-4722-8be7-1d059c17f147\" (UID: \"56ba6a0b-8b85-4722-8be7-1d059c17f147\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301310 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwttr\" (UniqueName: \"kubernetes.io/projected/38befb89-f7ad-4e15-bfa9-d54cb0595e97-kube-api-access-bwttr\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301325 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301335 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq9zx\" (UniqueName: \"kubernetes.io/projected/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-kube-api-access-cq9zx\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301344 4865 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddwwm\" (UniqueName: \"kubernetes.io/projected/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-kube-api-access-ddwwm\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301355 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb2mf\" (UniqueName: \"kubernetes.io/projected/db440cf4-6919-4524-a661-1ce3b3f009b0-kube-api-access-fb2mf\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301364 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301373 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c68fb97f-4d5d-4258-bd60-882e5aa61ba7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301407 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtq7q\" (UniqueName: \"kubernetes.io/projected/fb717539-b1e5-4a67-97d4-2632d4e7fd7e-kube-api-access-xtq7q\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.301416 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38befb89-f7ad-4e15-bfa9-d54cb0595e97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.303456 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ba6a0b-8b85-4722-8be7-1d059c17f147-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56ba6a0b-8b85-4722-8be7-1d059c17f147" (UID: "56ba6a0b-8b85-4722-8be7-1d059c17f147"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.305086 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gpwwp-config-nvfvn" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.305578 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ba6a0b-8b85-4722-8be7-1d059c17f147-kube-api-access-7dnnz" (OuterVolumeSpecName: "kube-api-access-7dnnz") pod "56ba6a0b-8b85-4722-8be7-1d059c17f147" (UID: "56ba6a0b-8b85-4722-8be7-1d059c17f147"). InnerVolumeSpecName "kube-api-access-7dnnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.402687 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run\") pod \"9c151253-1fe5-4e64-b57f-9b8054e5b654\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.402827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv4jf\" (UniqueName: \"kubernetes.io/projected/9c151253-1fe5-4e64-b57f-9b8054e5b654-kube-api-access-rv4jf\") pod \"9c151253-1fe5-4e64-b57f-9b8054e5b654\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.402889 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5jm6\" (UniqueName: \"kubernetes.io/projected/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-kube-api-access-k5jm6\") pod \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.402852 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run" (OuterVolumeSpecName: "var-run") pod "9c151253-1fe5-4e64-b57f-9b8054e5b654" (UID: "9c151253-1fe5-4e64-b57f-9b8054e5b654"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.402951 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-scripts\") pod \"9c151253-1fe5-4e64-b57f-9b8054e5b654\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404189 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-additional-scripts\") pod \"9c151253-1fe5-4e64-b57f-9b8054e5b654\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404302 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-operator-scripts\") pod \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\" (UID: \"20f96647-4c0b-483e-b0aa-a2aa6069ef3c\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404362 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-log-ovn\") pod \"9c151253-1fe5-4e64-b57f-9b8054e5b654\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404376 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-scripts" (OuterVolumeSpecName: "scripts") pod "9c151253-1fe5-4e64-b57f-9b8054e5b654" (UID: 
"9c151253-1fe5-4e64-b57f-9b8054e5b654"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404408 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run-ovn\") pod \"9c151253-1fe5-4e64-b57f-9b8054e5b654\" (UID: \"9c151253-1fe5-4e64-b57f-9b8054e5b654\") " Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404682 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9c151253-1fe5-4e64-b57f-9b8054e5b654" (UID: "9c151253-1fe5-4e64-b57f-9b8054e5b654"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404734 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9c151253-1fe5-4e64-b57f-9b8054e5b654" (UID: "9c151253-1fe5-4e64-b57f-9b8054e5b654"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404767 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9c151253-1fe5-4e64-b57f-9b8054e5b654" (UID: "9c151253-1fe5-4e64-b57f-9b8054e5b654"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404923 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dnnz\" (UniqueName: \"kubernetes.io/projected/56ba6a0b-8b85-4722-8be7-1d059c17f147-kube-api-access-7dnnz\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404947 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404961 4865 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c151253-1fe5-4e64-b57f-9b8054e5b654-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404974 4865 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404986 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ba6a0b-8b85-4722-8be7-1d059c17f147-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.404998 4865 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.405008 4865 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c151253-1fe5-4e64-b57f-9b8054e5b654-var-run\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.405411 4865 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20f96647-4c0b-483e-b0aa-a2aa6069ef3c" (UID: "20f96647-4c0b-483e-b0aa-a2aa6069ef3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.407647 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-kube-api-access-k5jm6" (OuterVolumeSpecName: "kube-api-access-k5jm6") pod "20f96647-4c0b-483e-b0aa-a2aa6069ef3c" (UID: "20f96647-4c0b-483e-b0aa-a2aa6069ef3c"). InnerVolumeSpecName "kube-api-access-k5jm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.409708 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c151253-1fe5-4e64-b57f-9b8054e5b654-kube-api-access-rv4jf" (OuterVolumeSpecName: "kube-api-access-rv4jf") pod "9c151253-1fe5-4e64-b57f-9b8054e5b654" (UID: "9c151253-1fe5-4e64-b57f-9b8054e5b654"). InnerVolumeSpecName "kube-api-access-rv4jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.506647 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.506708 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv4jf\" (UniqueName: \"kubernetes.io/projected/9c151253-1fe5-4e64-b57f-9b8054e5b654-kube-api-access-rv4jf\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:33 crc kubenswrapper[4865]: I0103 04:34:33.506731 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5jm6\" (UniqueName: \"kubernetes.io/projected/20f96647-4c0b-483e-b0aa-a2aa6069ef3c-kube-api-access-k5jm6\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:34 crc kubenswrapper[4865]: I0103 04:34:34.218447 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gpwwp-config-nvfvn" Jan 03 04:34:34 crc kubenswrapper[4865]: I0103 04:34:34.218474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lcnz6" event={"ID":"0acdff76-f952-437b-a36d-d1469377e304","Type":"ContainerStarted","Data":"c2bd731edf6964af3b3b09f7bd761645a291c31207168a1dd85d67e538b2b57e"} Jan 03 04:34:34 crc kubenswrapper[4865]: I0103 04:34:34.218710 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-wkxxn" Jan 03 04:34:34 crc kubenswrapper[4865]: I0103 04:34:34.237717 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-lcnz6" podStartSLOduration=12.490036086 podStartE2EDuration="17.237697689s" podCreationTimestamp="2026-01-03 04:34:17 +0000 UTC" firstStartedPulling="2026-01-03 04:34:28.241994555 +0000 UTC m=+1095.359047740" lastFinishedPulling="2026-01-03 04:34:32.989656148 +0000 UTC m=+1100.106709343" observedRunningTime="2026-01-03 04:34:34.23180795 +0000 UTC m=+1101.348861145" watchObservedRunningTime="2026-01-03 04:34:34.237697689 +0000 UTC m=+1101.354750894" Jan 03 04:34:34 crc kubenswrapper[4865]: I0103 04:34:34.414083 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gpwwp-config-nvfvn"] Jan 03 04:34:34 crc kubenswrapper[4865]: I0103 04:34:34.420860 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gpwwp-config-nvfvn"] Jan 03 04:34:35 crc kubenswrapper[4865]: I0103 04:34:35.169923 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c151253-1fe5-4e64-b57f-9b8054e5b654" path="/var/lib/kubelet/pods/9c151253-1fe5-4e64-b57f-9b8054e5b654/volumes" Jan 03 04:34:38 crc kubenswrapper[4865]: I0103 04:34:38.250482 4865 generic.go:334] "Generic (PLEG): container finished" podID="0acdff76-f952-437b-a36d-d1469377e304" containerID="c2bd731edf6964af3b3b09f7bd761645a291c31207168a1dd85d67e538b2b57e" exitCode=0 Jan 03 04:34:38 crc kubenswrapper[4865]: I0103 04:34:38.250773 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lcnz6" event={"ID":"0acdff76-f952-437b-a36d-d1469377e304","Type":"ContainerDied","Data":"c2bd731edf6964af3b3b09f7bd761645a291c31207168a1dd85d67e538b2b57e"} Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.581550 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lcnz6" Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.704798 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-combined-ca-bundle\") pod \"0acdff76-f952-437b-a36d-d1469377e304\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.704895 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-config-data\") pod \"0acdff76-f952-437b-a36d-d1469377e304\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.704935 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xswg4\" (UniqueName: \"kubernetes.io/projected/0acdff76-f952-437b-a36d-d1469377e304-kube-api-access-xswg4\") pod \"0acdff76-f952-437b-a36d-d1469377e304\" (UID: \"0acdff76-f952-437b-a36d-d1469377e304\") " Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.710011 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acdff76-f952-437b-a36d-d1469377e304-kube-api-access-xswg4" (OuterVolumeSpecName: "kube-api-access-xswg4") pod "0acdff76-f952-437b-a36d-d1469377e304" (UID: "0acdff76-f952-437b-a36d-d1469377e304"). InnerVolumeSpecName "kube-api-access-xswg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.728450 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0acdff76-f952-437b-a36d-d1469377e304" (UID: "0acdff76-f952-437b-a36d-d1469377e304"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.750552 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-config-data" (OuterVolumeSpecName: "config-data") pod "0acdff76-f952-437b-a36d-d1469377e304" (UID: "0acdff76-f952-437b-a36d-d1469377e304"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.806933 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.806960 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0acdff76-f952-437b-a36d-d1469377e304-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:39 crc kubenswrapper[4865]: I0103 04:34:39.806970 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xswg4\" (UniqueName: \"kubernetes.io/projected/0acdff76-f952-437b-a36d-d1469377e304-kube-api-access-xswg4\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.270397 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lcnz6" event={"ID":"0acdff76-f952-437b-a36d-d1469377e304","Type":"ContainerDied","Data":"cd8d0ce9d3e98b86df583ee25059f7afcbdf10c04a250a1cc87898bd6a4d89aa"} Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.270438 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd8d0ce9d3e98b86df583ee25059f7afcbdf10c04a250a1cc87898bd6a4d89aa" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.270409 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lcnz6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.273190 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gdpqx" event={"ID":"89004a40-1d1d-46ec-a342-a067fb1eaa54","Type":"ContainerStarted","Data":"8b2928d909656017cdf32ea61094feb8f81b6a8956aa5d9a6b86ad93fbba6522"} Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.297939 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gdpqx" podStartSLOduration=1.726254824 podStartE2EDuration="34.297916451s" podCreationTimestamp="2026-01-03 04:34:06 +0000 UTC" firstStartedPulling="2026-01-03 04:34:07.099837215 +0000 UTC m=+1074.216890400" lastFinishedPulling="2026-01-03 04:34:39.671498832 +0000 UTC m=+1106.788552027" observedRunningTime="2026-01-03 04:34:40.295707952 +0000 UTC m=+1107.412761137" watchObservedRunningTime="2026-01-03 04:34:40.297916451 +0000 UTC m=+1107.414969656" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.540611 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6rk4f"] Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.541899 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ba6a0b-8b85-4722-8be7-1d059c17f147" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.542020 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ba6a0b-8b85-4722-8be7-1d059c17f147" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.542177 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.542287 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a" containerName="mariadb-account-create-update" 
Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.542438 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerName="dnsmasq-dns" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.542546 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerName="dnsmasq-dns" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.542676 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68fb97f-4d5d-4258-bd60-882e5aa61ba7" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.542786 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68fb97f-4d5d-4258-bd60-882e5aa61ba7" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.542909 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acdff76-f952-437b-a36d-d1469377e304" containerName="keystone-db-sync" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.543007 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acdff76-f952-437b-a36d-d1469377e304" containerName="keystone-db-sync" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.543101 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38befb89-f7ad-4e15-bfa9-d54cb0595e97" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.543190 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="38befb89-f7ad-4e15-bfa9-d54cb0595e97" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.543315 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db440cf4-6919-4524-a661-1ce3b3f009b0" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.543430 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db440cf4-6919-4524-a661-1ce3b3f009b0" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.543538 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c151253-1fe5-4e64-b57f-9b8054e5b654" containerName="ovn-config" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.543661 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c151253-1fe5-4e64-b57f-9b8054e5b654" containerName="ovn-config" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.543759 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f96647-4c0b-483e-b0aa-a2aa6069ef3c" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.543891 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f96647-4c0b-483e-b0aa-a2aa6069ef3c" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.544012 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb717539-b1e5-4a67-97d4-2632d4e7fd7e" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.544157 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb717539-b1e5-4a67-97d4-2632d4e7fd7e" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: E0103 04:34:40.544395 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerName="init" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.544585 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerName="init" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.545207 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb717539-b1e5-4a67-97d4-2632d4e7fd7e" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.545421 4865 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c68fb97f-4d5d-4258-bd60-882e5aa61ba7" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.545540 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d764b4-196f-468f-b4e4-8a3f9f6f206c" containerName="dnsmasq-dns" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.545759 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.545871 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f96647-4c0b-483e-b0aa-a2aa6069ef3c" containerName="mariadb-database-create" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.546012 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acdff76-f952-437b-a36d-d1469377e304" containerName="keystone-db-sync" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.546133 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ba6a0b-8b85-4722-8be7-1d059c17f147" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.546275 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="db440cf4-6919-4524-a661-1ce3b3f009b0" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.546372 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c151253-1fe5-4e64-b57f-9b8054e5b654" containerName="ovn-config" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.546511 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="38befb89-f7ad-4e15-bfa9-d54cb0595e97" containerName="mariadb-account-create-update" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.547475 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.549694 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.549950 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.550143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.550311 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-82kzt" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.553513 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.567777 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6rk4f"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.575802 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-sfhx6"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.583520 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.603494 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-sfhx6"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.621187 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-combined-ca-bundle\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.621236 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-config-data\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.621314 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-fernet-keys\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.621438 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-scripts\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.621562 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4v6\" 
(UniqueName: \"kubernetes.io/projected/b34876e9-7919-413e-83b3-6ee30560e822-kube-api-access-wn4v6\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.621584 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-credential-keys\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.685527 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6487489875-pjsvm"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.687186 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.689331 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-qjggf" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.689475 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.689581 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.689697 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.699746 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fhrp6"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.700847 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.702519 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.703655 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wk8cr" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.704188 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6487489875-pjsvm"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.704640 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723374 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-svc\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723447 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-scripts\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723494 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4v6\" (UniqueName: \"kubernetes.io/projected/b34876e9-7919-413e-83b3-6ee30560e822-kube-api-access-wn4v6\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723513 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-credential-keys\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723534 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723577 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723602 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-combined-ca-bundle\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723624 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-config-data\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723649 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56v5c\" (UniqueName: \"kubernetes.io/projected/865da7b4-c089-4081-a3bc-51f245592dbb-kube-api-access-56v5c\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723673 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.723805 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-fernet-keys\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.724340 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-config\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.724447 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fhrp6"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.732172 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-credential-keys\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " 
pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.732904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-config-data\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.733823 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-scripts\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.736055 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-combined-ca-bundle\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.738249 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-fernet-keys\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.757880 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4v6\" (UniqueName: \"kubernetes.io/projected/b34876e9-7919-413e-83b3-6ee30560e822-kube-api-access-wn4v6\") pod \"keystone-bootstrap-6rk4f\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.796165 4865 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-db-sync-ll974"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.797160 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.805115 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-smtl4" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.805271 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.805809 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.816450 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.818280 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.820770 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.822012 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.825608 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/831a0414-a201-4718-8f29-453a55b575b0-logs\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.825831 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.825905 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/831a0414-a201-4718-8f29-453a55b575b0-horizon-secret-key\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.825942 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56v5c\" (UniqueName: \"kubernetes.io/projected/865da7b4-c089-4081-a3bc-51f245592dbb-kube-api-access-56v5c\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.825967 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-scripts\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826002 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826049 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-config\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826075 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-combined-ca-bundle\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826101 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-svc\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826142 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbbb\" (UniqueName: \"kubernetes.io/projected/49719dec-6060-4d9b-ad15-fbeac83d7ab1-kube-api-access-mjbbb\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826163 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-config\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826191 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r7t4\" (UniqueName: 
\"kubernetes.io/projected/831a0414-a201-4718-8f29-453a55b575b0-kube-api-access-2r7t4\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826255 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-config-data\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.826302 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.829819 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.831573 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.832571 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.832685 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.838066 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-config\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.847728 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-svc\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.847802 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ll974"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.872741 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.873903 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56v5c\" (UniqueName: \"kubernetes.io/projected/865da7b4-c089-4081-a3bc-51f245592dbb-kube-api-access-56v5c\") pod \"dnsmasq-dns-55fff446b9-sfhx6\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.898582 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6rcv2"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.899606 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.904277 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.913840 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pghzp" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.914016 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931455 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcfp\" (UniqueName: \"kubernetes.io/projected/ec473391-48f3-447b-bcd5-bbee75aa85a4-kube-api-access-lzcfp\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931505 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-scripts\") pod \"ceilometer-0\" (UID: 
\"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/831a0414-a201-4718-8f29-453a55b575b0-horizon-secret-key\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931556 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931575 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-run-httpd\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931594 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-scripts\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931613 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-config-data\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 
04:34:40.931629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-log-httpd\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931662 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g45l\" (UniqueName: \"kubernetes.io/projected/394d36aa-4f2f-4f5f-a904-1fb372f2de27-kube-api-access-2g45l\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931684 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-combined-ca-bundle\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931702 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-scripts\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931731 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-config-data\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931749 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-mjbbb\" (UniqueName: \"kubernetes.io/projected/49719dec-6060-4d9b-ad15-fbeac83d7ab1-kube-api-access-mjbbb\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931765 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-config\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931782 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r7t4\" (UniqueName: \"kubernetes.io/projected/831a0414-a201-4718-8f29-453a55b575b0-kube-api-access-2r7t4\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931798 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-config-data\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931826 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-combined-ca-bundle\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931844 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/394d36aa-4f2f-4f5f-a904-1fb372f2de27-etc-machine-id\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931870 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/831a0414-a201-4718-8f29-453a55b575b0-logs\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931893 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.931911 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-db-sync-config-data\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.944316 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-config\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.944616 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/831a0414-a201-4718-8f29-453a55b575b0-logs\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " 
pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.944984 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6rcv2"] Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.945460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-scripts\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.950094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-config-data\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.952431 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-combined-ca-bundle\") pod \"neutron-db-sync-fhrp6\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:40 crc kubenswrapper[4865]: I0103 04:34:40.960791 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/831a0414-a201-4718-8f29-453a55b575b0-horizon-secret-key\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:40.993800 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbbb\" (UniqueName: \"kubernetes.io/projected/49719dec-6060-4d9b-ad15-fbeac83d7ab1-kube-api-access-mjbbb\") pod \"neutron-db-sync-fhrp6\" (UID: 
\"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:40.995667 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r7t4\" (UniqueName: \"kubernetes.io/projected/831a0414-a201-4718-8f29-453a55b575b0-kube-api-access-2r7t4\") pod \"horizon-6487489875-pjsvm\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.004646 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d8d4694c7-2sb2k"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.020079 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.024723 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-sfhx6"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.025005 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.025022 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.036270 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d8d4694c7-2sb2k"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.036827 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-combined-ca-bundle\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037139 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/394d36aa-4f2f-4f5f-a904-1fb372f2de27-etc-machine-id\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037191 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037216 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-db-sync-config-data\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037252 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-combined-ca-bundle\") 
pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037284 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzcfp\" (UniqueName: \"kubernetes.io/projected/ec473391-48f3-447b-bcd5-bbee75aa85a4-kube-api-access-lzcfp\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037310 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-db-sync-config-data\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037329 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-scripts\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037353 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037391 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2f9m\" (UniqueName: \"kubernetes.io/projected/d16da42a-8750-476c-abdf-8054eca2694a-kube-api-access-x2f9m\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " 
pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037410 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-run-httpd\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037431 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-config-data\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037448 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-log-httpd\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037484 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g45l\" (UniqueName: \"kubernetes.io/projected/394d36aa-4f2f-4f5f-a904-1fb372f2de27-kube-api-access-2g45l\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037506 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-scripts\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.037536 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-config-data\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.038328 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/394d36aa-4f2f-4f5f-a904-1fb372f2de27-etc-machine-id\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.040610 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-log-httpd\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.042259 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-combined-ca-bundle\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.042995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-scripts\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.044038 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 
04:34:41.044871 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-config-data\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.044998 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-config-data\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.047969 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-run-httpd\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.048233 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-db-sync-config-data\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.048374 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-scripts\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.052163 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.062278 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzcfp\" (UniqueName: \"kubernetes.io/projected/ec473391-48f3-447b-bcd5-bbee75aa85a4-kube-api-access-lzcfp\") pod \"ceilometer-0\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.066705 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g45l\" (UniqueName: \"kubernetes.io/projected/394d36aa-4f2f-4f5f-a904-1fb372f2de27-kube-api-access-2g45l\") pod \"cinder-db-sync-ll974\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.108861 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-bd7qz"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.111316 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.124628 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-bd7qz"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.125014 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ll974" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.140032 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jbm\" (UniqueName: \"kubernetes.io/projected/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-kube-api-access-d4jbm\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.140079 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-logs\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.140133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-horizon-secret-key\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.140157 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-combined-ca-bundle\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.140185 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-scripts\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " 
pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.144928 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-m8g98"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.148291 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.148944 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.149646 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-db-sync-config-data\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.149705 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-config-data\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.149729 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2f9m\" (UniqueName: \"kubernetes.io/projected/d16da42a-8750-476c-abdf-8054eca2694a-kube-api-access-x2f9m\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.155011 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.155237 4865 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"placement-scripts" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.156672 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8q7vz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.186286 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2f9m\" (UniqueName: \"kubernetes.io/projected/d16da42a-8750-476c-abdf-8054eca2694a-kube-api-access-x2f9m\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.202101 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-combined-ca-bundle\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.207128 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-db-sync-config-data\") pod \"barbican-db-sync-6rcv2\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.213286 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m8g98"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.255118 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jbm\" (UniqueName: \"kubernetes.io/projected/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-kube-api-access-d4jbm\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.255200 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-logs\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.255242 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.255299 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-horizon-secret-key\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.255322 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzhwx\" (UniqueName: \"kubernetes.io/projected/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-kube-api-access-dzhwx\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.255343 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-scripts\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.255363 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccvlh\" (UniqueName: \"kubernetes.io/projected/bef3c2c2-d5de-4735-a7bd-dad385be255c-kube-api-access-ccvlh\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.256302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-config\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.256599 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-config-data\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.256663 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.256702 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-config-data\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.256778 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-combined-ca-bundle\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.257079 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.257212 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-logs\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.257249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-scripts\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.257263 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-logs\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.257335 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.257947 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-scripts\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.258674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-config-data\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.262050 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-horizon-secret-key\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.276944 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jbm\" (UniqueName: \"kubernetes.io/projected/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-kube-api-access-d4jbm\") pod \"horizon-6d8d4694c7-2sb2k\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.293550 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.354755 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360309 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360392 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzhwx\" (UniqueName: \"kubernetes.io/projected/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-kube-api-access-dzhwx\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360418 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccvlh\" (UniqueName: \"kubernetes.io/projected/bef3c2c2-d5de-4735-a7bd-dad385be255c-kube-api-access-ccvlh\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360437 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-config\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360459 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-config-data\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360478 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360518 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-combined-ca-bundle\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360542 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360573 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-logs\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360590 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-scripts\") pod \"placement-db-sync-m8g98\" (UID: 
\"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.360616 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.361351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.362844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-logs\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.365190 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.366063 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-scripts\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.366249 
4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.366574 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-config\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.368858 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-config-data\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.366495 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.377728 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-combined-ca-bundle\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.381268 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzhwx\" (UniqueName: 
\"kubernetes.io/projected/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-kube-api-access-dzhwx\") pod \"placement-db-sync-m8g98\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.381841 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccvlh\" (UniqueName: \"kubernetes.io/projected/bef3c2c2-d5de-4735-a7bd-dad385be255c-kube-api-access-ccvlh\") pod \"dnsmasq-dns-76fcf4b695-bd7qz\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.462819 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.492957 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m8g98" Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.520116 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6rk4f"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.538077 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-sfhx6"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.717599 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6487489875-pjsvm"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.802411 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ll974"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.818436 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.826145 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fhrp6"] Jan 03 04:34:41 crc kubenswrapper[4865]: W0103 04:34:41.885351 4865 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod831a0414_a201_4718_8f29_453a55b575b0.slice/crio-c08dd97a0a0d635577bde9915bc7127f470cdcfb95c42fdc6d688401eee7be00 WatchSource:0}: Error finding container c08dd97a0a0d635577bde9915bc7127f470cdcfb95c42fdc6d688401eee7be00: Status 404 returned error can't find the container with id c08dd97a0a0d635577bde9915bc7127f470cdcfb95c42fdc6d688401eee7be00 Jan 03 04:34:41 crc kubenswrapper[4865]: I0103 04:34:41.928366 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6rcv2"] Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.309006 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6rcv2" event={"ID":"d16da42a-8750-476c-abdf-8054eca2694a","Type":"ContainerStarted","Data":"cc40d0c54b353e32c830148c43cbb00d5f3966112c668367f24f2b13df8ad48c"} Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.310901 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6487489875-pjsvm" event={"ID":"831a0414-a201-4718-8f29-453a55b575b0","Type":"ContainerStarted","Data":"c08dd97a0a0d635577bde9915bc7127f470cdcfb95c42fdc6d688401eee7be00"} Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.312004 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fhrp6" event={"ID":"49719dec-6060-4d9b-ad15-fbeac83d7ab1","Type":"ContainerStarted","Data":"f0feac30f6c07268753b0022f164d7b7e1a0cf97b7a9c142bfbcdc8979567786"} Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.313068 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" event={"ID":"865da7b4-c089-4081-a3bc-51f245592dbb","Type":"ContainerStarted","Data":"3646ac7b7ae5e3ee3b1bb8c8dfff9504cc55f730718abc00e6af9c732a5eb342"} Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.314330 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-ll974" event={"ID":"394d36aa-4f2f-4f5f-a904-1fb372f2de27","Type":"ContainerStarted","Data":"10345bebc05487832f0a76ce5034622ed24c0fec8372150b4688c25ca1f7b5ed"} Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.315247 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6rk4f" event={"ID":"b34876e9-7919-413e-83b3-6ee30560e822","Type":"ContainerStarted","Data":"dd826edf36b3bc4c9a2c9d2e5e77d807a7773ea3fcac7f9fa8952ddf9e550c4e"} Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.316493 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerStarted","Data":"c47cffdeea4c6d344246b555e2799fce55205d4b8d9c4cdc90129b840b544823"} Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.415419 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-bd7qz"] Jan 03 04:34:42 crc kubenswrapper[4865]: W0103 04:34:42.417520 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef3c2c2_d5de_4735_a7bd_dad385be255c.slice/crio-959613f4b059f2b0c37704a9c66f97d421e628d3f57668d064cfcb3e2989d8d9 WatchSource:0}: Error finding container 959613f4b059f2b0c37704a9c66f97d421e628d3f57668d064cfcb3e2989d8d9: Status 404 returned error can't find the container with id 959613f4b059f2b0c37704a9c66f97d421e628d3f57668d064cfcb3e2989d8d9 Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.424938 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d8d4694c7-2sb2k"] Jan 03 04:34:42 crc kubenswrapper[4865]: W0103 04:34:42.425615 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40bb65d_6cf3_4133_9fd6_1aa59f68ba2a.slice/crio-ef2ec4b6ec903180d6336768df4aea4d6dba6d6329175e31026a0a9ac4c232cc WatchSource:0}: Error 
finding container ef2ec4b6ec903180d6336768df4aea4d6dba6d6329175e31026a0a9ac4c232cc: Status 404 returned error can't find the container with id ef2ec4b6ec903180d6336768df4aea4d6dba6d6329175e31026a0a9ac4c232cc Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.550652 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m8g98"] Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.802046 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.809846 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6487489875-pjsvm"] Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.835867 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b6bf49cd5-zkq6g"] Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.837809 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.858872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b6bf49cd5-zkq6g"] Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.886861 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49d6e68-f987-49eb-ad04-41d4ba37798e-logs\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.886950 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-scripts\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 
04:34:42.886989 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49d6e68-f987-49eb-ad04-41d4ba37798e-horizon-secret-key\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.887026 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-config-data\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.887053 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fxjh\" (UniqueName: \"kubernetes.io/projected/b49d6e68-f987-49eb-ad04-41d4ba37798e-kube-api-access-6fxjh\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.988870 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49d6e68-f987-49eb-ad04-41d4ba37798e-horizon-secret-key\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.988933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-config-data\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.989019 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fxjh\" (UniqueName: \"kubernetes.io/projected/b49d6e68-f987-49eb-ad04-41d4ba37798e-kube-api-access-6fxjh\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.989059 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49d6e68-f987-49eb-ad04-41d4ba37798e-logs\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.989159 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-scripts\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.989635 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49d6e68-f987-49eb-ad04-41d4ba37798e-logs\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.990244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-scripts\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.990642 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-config-data\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:42 crc kubenswrapper[4865]: I0103 04:34:42.999112 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49d6e68-f987-49eb-ad04-41d4ba37798e-horizon-secret-key\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:43 crc kubenswrapper[4865]: I0103 04:34:43.005115 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fxjh\" (UniqueName: \"kubernetes.io/projected/b49d6e68-f987-49eb-ad04-41d4ba37798e-kube-api-access-6fxjh\") pod \"horizon-5b6bf49cd5-zkq6g\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:43 crc kubenswrapper[4865]: I0103 04:34:43.156243 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:34:43 crc kubenswrapper[4865]: I0103 04:34:43.336361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8d4694c7-2sb2k" event={"ID":"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a","Type":"ContainerStarted","Data":"ef2ec4b6ec903180d6336768df4aea4d6dba6d6329175e31026a0a9ac4c232cc"} Jan 03 04:34:43 crc kubenswrapper[4865]: I0103 04:34:43.341467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" event={"ID":"bef3c2c2-d5de-4735-a7bd-dad385be255c","Type":"ContainerStarted","Data":"959613f4b059f2b0c37704a9c66f97d421e628d3f57668d064cfcb3e2989d8d9"} Jan 03 04:34:43 crc kubenswrapper[4865]: I0103 04:34:43.345027 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m8g98" event={"ID":"d4902a49-2ac7-4172-9f70-b4b14dfb7d67","Type":"ContainerStarted","Data":"52f49351841b2aa7dc5f24403ae98c6a9cac35a6be71cd7882efffba00ad12d9"} Jan 03 04:34:43 crc kubenswrapper[4865]: I0103 04:34:43.651580 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b6bf49cd5-zkq6g"] Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.364008 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fhrp6" event={"ID":"49719dec-6060-4d9b-ad15-fbeac83d7ab1","Type":"ContainerStarted","Data":"a8151f80e3f0bfd14c0e8015fac2ad0a5ec57c380d318ca41181a991a4fe56bc"} Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.379673 4865 generic.go:334] "Generic (PLEG): container finished" podID="865da7b4-c089-4081-a3bc-51f245592dbb" containerID="458ac61e8543f5c0662811c69c914c0282d658f1b7e8cd4fec775f8513dfe528" exitCode=0 Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.379721 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" 
event={"ID":"865da7b4-c089-4081-a3bc-51f245592dbb","Type":"ContainerDied","Data":"458ac61e8543f5c0662811c69c914c0282d658f1b7e8cd4fec775f8513dfe528"} Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.399130 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fhrp6" podStartSLOduration=4.399116983 podStartE2EDuration="4.399116983s" podCreationTimestamp="2026-01-03 04:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:44.388645469 +0000 UTC m=+1111.505698654" watchObservedRunningTime="2026-01-03 04:34:44.399116983 +0000 UTC m=+1111.516170168" Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.401317 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6rk4f" event={"ID":"b34876e9-7919-413e-83b3-6ee30560e822","Type":"ContainerStarted","Data":"ae3e9565dd7f755e5cc02a7f2e55ca30f9fe8c91d4e7ec5bd169e56b3a8005ca"} Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.431560 4865 generic.go:334] "Generic (PLEG): container finished" podID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerID="4b3319169d8de7eafdf73a57254319b68fe242ade82e4595fb24857b16835207" exitCode=0 Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.431618 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" event={"ID":"bef3c2c2-d5de-4735-a7bd-dad385be255c","Type":"ContainerDied","Data":"4b3319169d8de7eafdf73a57254319b68fe242ade82e4595fb24857b16835207"} Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.445468 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6bf49cd5-zkq6g" event={"ID":"b49d6e68-f987-49eb-ad04-41d4ba37798e","Type":"ContainerStarted","Data":"4d64325a150bc3e947d07601fb22951dbe3b8a8651670dbccf41e885bd8667ef"} Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.450511 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6rk4f" podStartSLOduration=4.450500745 podStartE2EDuration="4.450500745s" podCreationTimestamp="2026-01-03 04:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:44.446034584 +0000 UTC m=+1111.563087769" watchObservedRunningTime="2026-01-03 04:34:44.450500745 +0000 UTC m=+1111.567553930" Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.845518 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.931834 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56v5c\" (UniqueName: \"kubernetes.io/projected/865da7b4-c089-4081-a3bc-51f245592dbb-kube-api-access-56v5c\") pod \"865da7b4-c089-4081-a3bc-51f245592dbb\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.932879 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-config\") pod \"865da7b4-c089-4081-a3bc-51f245592dbb\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.932909 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-nb\") pod \"865da7b4-c089-4081-a3bc-51f245592dbb\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.933002 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-sb\") pod \"865da7b4-c089-4081-a3bc-51f245592dbb\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.933038 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-svc\") pod \"865da7b4-c089-4081-a3bc-51f245592dbb\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.933096 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-swift-storage-0\") pod \"865da7b4-c089-4081-a3bc-51f245592dbb\" (UID: \"865da7b4-c089-4081-a3bc-51f245592dbb\") " Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.940716 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865da7b4-c089-4081-a3bc-51f245592dbb-kube-api-access-56v5c" (OuterVolumeSpecName: "kube-api-access-56v5c") pod "865da7b4-c089-4081-a3bc-51f245592dbb" (UID: "865da7b4-c089-4081-a3bc-51f245592dbb"). InnerVolumeSpecName "kube-api-access-56v5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.969469 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "865da7b4-c089-4081-a3bc-51f245592dbb" (UID: "865da7b4-c089-4081-a3bc-51f245592dbb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:44 crc kubenswrapper[4865]: I0103 04:34:44.979216 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "865da7b4-c089-4081-a3bc-51f245592dbb" (UID: "865da7b4-c089-4081-a3bc-51f245592dbb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.001773 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-config" (OuterVolumeSpecName: "config") pod "865da7b4-c089-4081-a3bc-51f245592dbb" (UID: "865da7b4-c089-4081-a3bc-51f245592dbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.002056 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "865da7b4-c089-4081-a3bc-51f245592dbb" (UID: "865da7b4-c089-4081-a3bc-51f245592dbb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.012811 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "865da7b4-c089-4081-a3bc-51f245592dbb" (UID: "865da7b4-c089-4081-a3bc-51f245592dbb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.035353 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.035399 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.035410 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.035419 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.035428 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865da7b4-c089-4081-a3bc-51f245592dbb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.035439 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56v5c\" (UniqueName: \"kubernetes.io/projected/865da7b4-c089-4081-a3bc-51f245592dbb-kube-api-access-56v5c\") on node \"crc\" DevicePath \"\"" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.470848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" event={"ID":"bef3c2c2-d5de-4735-a7bd-dad385be255c","Type":"ContainerStarted","Data":"49f14e0b7d744e734a88eee70f0ba7f4c0ac3f2a3cd4da8e1f4eb38797ff0645"} Jan 03 04:34:45 crc 
kubenswrapper[4865]: I0103 04:34:45.471312 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.491297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" event={"ID":"865da7b4-c089-4081-a3bc-51f245592dbb","Type":"ContainerDied","Data":"3646ac7b7ae5e3ee3b1bb8c8dfff9504cc55f730718abc00e6af9c732a5eb342"} Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.491365 4865 scope.go:117] "RemoveContainer" containerID="458ac61e8543f5c0662811c69c914c0282d658f1b7e8cd4fec775f8513dfe528" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.491553 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-sfhx6" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.495981 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" podStartSLOduration=5.495961277 podStartE2EDuration="5.495961277s" podCreationTimestamp="2026-01-03 04:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:34:45.491273509 +0000 UTC m=+1112.608326694" watchObservedRunningTime="2026-01-03 04:34:45.495961277 +0000 UTC m=+1112.613014462" Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.558036 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-sfhx6"] Jan 03 04:34:45 crc kubenswrapper[4865]: I0103 04:34:45.567583 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-sfhx6"] Jan 03 04:34:47 crc kubenswrapper[4865]: I0103 04:34:47.166115 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865da7b4-c089-4081-a3bc-51f245592dbb" path="/var/lib/kubelet/pods/865da7b4-c089-4081-a3bc-51f245592dbb/volumes" Jan 03 
04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.719138 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d8d4694c7-2sb2k"] Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.747323 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cc9469fc6-wdk7w"] Jan 03 04:34:49 crc kubenswrapper[4865]: E0103 04:34:49.747831 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865da7b4-c089-4081-a3bc-51f245592dbb" containerName="init" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.747852 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="865da7b4-c089-4081-a3bc-51f245592dbb" containerName="init" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.748170 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="865da7b4-c089-4081-a3bc-51f245592dbb" containerName="init" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.749244 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.755595 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.758040 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc9469fc6-wdk7w"] Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.823118 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b6bf49cd5-zkq6g"] Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.856189 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c8ff89456-njqfs"] Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.857642 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.886239 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c8ff89456-njqfs"] Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.941700 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l56zj\" (UniqueName: \"kubernetes.io/projected/70339b26-8f06-4fe7-821e-cc376084eace-kube-api-access-l56zj\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.941796 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70339b26-8f06-4fe7-821e-cc376084eace-logs\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.941844 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-tls-certs\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.941879 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-secret-key\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.941899 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-combined-ca-bundle\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.942002 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-scripts\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:49 crc kubenswrapper[4865]: I0103 04:34:49.942035 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-config-data\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.043808 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70339b26-8f06-4fe7-821e-cc376084eace-logs\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.043942 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565fcd3f-e73a-446a-b862-717cfb106bd1-scripts\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044013 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-combined-ca-bundle\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-tls-certs\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044086 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-horizon-tls-certs\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044126 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-secret-key\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044146 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-horizon-secret-key\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044170 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-combined-ca-bundle\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044223 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-scripts\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2qm\" (UniqueName: \"kubernetes.io/projected/565fcd3f-e73a-446a-b862-717cfb106bd1-kube-api-access-wd2qm\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044258 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-config-data\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044319 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565fcd3f-e73a-446a-b862-717cfb106bd1-logs\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044551 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/565fcd3f-e73a-446a-b862-717cfb106bd1-config-data\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.044602 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l56zj\" (UniqueName: \"kubernetes.io/projected/70339b26-8f06-4fe7-821e-cc376084eace-kube-api-access-l56zj\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.047485 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70339b26-8f06-4fe7-821e-cc376084eace-logs\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.048841 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-config-data\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.055022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-scripts\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.059071 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-secret-key\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: 
\"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.059106 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-tls-certs\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.059125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-combined-ca-bundle\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.061270 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l56zj\" (UniqueName: \"kubernetes.io/projected/70339b26-8f06-4fe7-821e-cc376084eace-kube-api-access-l56zj\") pod \"horizon-6cc9469fc6-wdk7w\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.079324 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.146782 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565fcd3f-e73a-446a-b862-717cfb106bd1-scripts\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.146837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-combined-ca-bundle\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.146869 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-horizon-tls-certs\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.146887 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-horizon-secret-key\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.146927 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2qm\" (UniqueName: \"kubernetes.io/projected/565fcd3f-e73a-446a-b862-717cfb106bd1-kube-api-access-wd2qm\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 
crc kubenswrapper[4865]: I0103 04:34:50.146963 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565fcd3f-e73a-446a-b862-717cfb106bd1-logs\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.146983 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/565fcd3f-e73a-446a-b862-717cfb106bd1-config-data\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.148420 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/565fcd3f-e73a-446a-b862-717cfb106bd1-config-data\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.150095 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565fcd3f-e73a-446a-b862-717cfb106bd1-scripts\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.151224 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565fcd3f-e73a-446a-b862-717cfb106bd1-logs\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.153655 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-horizon-secret-key\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.154460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-combined-ca-bundle\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.154523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/565fcd3f-e73a-446a-b862-717cfb106bd1-horizon-tls-certs\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.168424 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2qm\" (UniqueName: \"kubernetes.io/projected/565fcd3f-e73a-446a-b862-717cfb106bd1-kube-api-access-wd2qm\") pod \"horizon-5c8ff89456-njqfs\" (UID: \"565fcd3f-e73a-446a-b862-717cfb106bd1\") " pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:50 crc kubenswrapper[4865]: I0103 04:34:50.184158 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:34:51 crc kubenswrapper[4865]: I0103 04:34:51.465570 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:34:51 crc kubenswrapper[4865]: I0103 04:34:51.556253 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-5sdb5"] Jan 03 04:34:51 crc kubenswrapper[4865]: I0103 04:34:51.557789 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" containerID="cri-o://38e6418945aded7a94fafbf95afb6897f5fe03fb61dce7cfa754b469b53ea65b" gracePeriod=10 Jan 03 04:34:52 crc kubenswrapper[4865]: I0103 04:34:52.576978 4865 generic.go:334] "Generic (PLEG): container finished" podID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerID="38e6418945aded7a94fafbf95afb6897f5fe03fb61dce7cfa754b469b53ea65b" exitCode=0 Jan 03 04:34:52 crc kubenswrapper[4865]: I0103 04:34:52.577021 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" event={"ID":"08401bb5-3c30-45b4-bbf1-f963080642ce","Type":"ContainerDied","Data":"38e6418945aded7a94fafbf95afb6897f5fe03fb61dce7cfa754b469b53ea65b"} Jan 03 04:34:55 crc kubenswrapper[4865]: I0103 04:34:55.611287 4865 generic.go:334] "Generic (PLEG): container finished" podID="b34876e9-7919-413e-83b3-6ee30560e822" containerID="ae3e9565dd7f755e5cc02a7f2e55ca30f9fe8c91d4e7ec5bd169e56b3a8005ca" exitCode=0 Jan 03 04:34:55 crc kubenswrapper[4865]: I0103 04:34:55.611412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6rk4f" event={"ID":"b34876e9-7919-413e-83b3-6ee30560e822","Type":"ContainerDied","Data":"ae3e9565dd7f755e5cc02a7f2e55ca30f9fe8c91d4e7ec5bd169e56b3a8005ca"} Jan 03 04:34:55 crc kubenswrapper[4865]: I0103 04:34:55.657853 4865 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Jan 03 04:34:58 crc kubenswrapper[4865]: E0103 04:34:58.036697 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 03 04:34:58 crc kubenswrapper[4865]: E0103 04:34:58.039680 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65ch69hb4h66ch55bh58bh559h5c8h567h5c7h65chf5h6dh5dch9h54dh644h585h57h98h5h65hf5hfch556h594h56chfch5dh5d9hf8h64cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fxjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPol
icy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5b6bf49cd5-zkq6g_openstack(b49d6e68-f987-49eb-ad04-41d4ba37798e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:34:58 crc kubenswrapper[4865]: E0103 04:34:58.044299 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5b6bf49cd5-zkq6g" podUID="b49d6e68-f987-49eb-ad04-41d4ba37798e" Jan 03 04:35:00 crc kubenswrapper[4865]: I0103 04:35:00.658659 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Jan 03 04:35:05 crc kubenswrapper[4865]: I0103 04:35:05.658599 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Jan 03 04:35:05 crc kubenswrapper[4865]: I0103 04:35:05.659363 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:35:06 crc kubenswrapper[4865]: E0103 04:35:06.553078 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 03 04:35:06 crc kubenswrapper[4865]: E0103 04:35:06.553787 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd7h8dhcbh574hfh5c6h675h65h87h79hffh66fh676h5d5h9dh68dhbbh678h5dch89h5c9h5f8h695h7bh57fh84h678h66h64h86hf9h5c7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4jbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true
,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d8d4694c7-2sb2k_openstack(d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:35:06 crc kubenswrapper[4865]: E0103 04:35:06.558030 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6d8d4694c7-2sb2k" podUID="d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a" Jan 03 04:35:09 crc kubenswrapper[4865]: E0103 04:35:09.586986 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 03 04:35:09 crc kubenswrapper[4865]: E0103 04:35:09.587773 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ll974_openstack(394d36aa-4f2f-4f5f-a904-1fb372f2de27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:35:09 crc kubenswrapper[4865]: E0103 04:35:09.589232 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-ll974" podUID="394d36aa-4f2f-4f5f-a904-1fb372f2de27" Jan 03 04:35:09 crc kubenswrapper[4865]: E0103 04:35:09.754358 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-ll974" podUID="394d36aa-4f2f-4f5f-a904-1fb372f2de27" Jan 03 04:35:11 crc kubenswrapper[4865]: E0103 04:35:11.079158 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 03 04:35:11 crc kubenswrapper[4865]: E0103 04:35:11.079753 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzhwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-m8g98_openstack(d4902a49-2ac7-4172-9f70-b4b14dfb7d67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:35:11 crc kubenswrapper[4865]: E0103 04:35:11.081545 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-m8g98" podUID="d4902a49-2ac7-4172-9f70-b4b14dfb7d67" Jan 03 04:35:11 crc kubenswrapper[4865]: E0103 04:35:11.100517 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 03 04:35:11 crc kubenswrapper[4865]: E0103 04:35:11.100731 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf9hd4h67dh5c8h644h698h5f5h686h58h54chf4h5f5h688hb8h65fhf4h57bh678hb9h9bh5d7h659h665h68dh565h556h9dh64chf7h64h687hcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6487489875-pjsvm_openstack(831a0414-a201-4718-8f29-453a55b575b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:35:11 crc kubenswrapper[4865]: E0103 
04:35:11.103246 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6487489875-pjsvm" podUID="831a0414-a201-4718-8f29-453a55b575b0" Jan 03 04:35:11 crc kubenswrapper[4865]: I0103 04:35:11.777213 4865 generic.go:334] "Generic (PLEG): container finished" podID="89004a40-1d1d-46ec-a342-a067fb1eaa54" containerID="8b2928d909656017cdf32ea61094feb8f81b6a8956aa5d9a6b86ad93fbba6522" exitCode=0 Jan 03 04:35:11 crc kubenswrapper[4865]: I0103 04:35:11.777361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gdpqx" event={"ID":"89004a40-1d1d-46ec-a342-a067fb1eaa54","Type":"ContainerDied","Data":"8b2928d909656017cdf32ea61094feb8f81b6a8956aa5d9a6b86ad93fbba6522"} Jan 03 04:35:11 crc kubenswrapper[4865]: E0103 04:35:11.778975 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-m8g98" podUID="d4902a49-2ac7-4172-9f70-b4b14dfb7d67" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.124345 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.131946 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.146139 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.148418 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.159340 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295311 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-logs\") pod \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295370 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/831a0414-a201-4718-8f29-453a55b575b0-logs\") pod \"831a0414-a201-4718-8f29-453a55b575b0\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295497 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-svc\") pod \"08401bb5-3c30-45b4-bbf1-f963080642ce\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295539 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49d6e68-f987-49eb-ad04-41d4ba37798e-logs\") pod \"b49d6e68-f987-49eb-ad04-41d4ba37798e\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295665 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-horizon-secret-key\") pod \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295706 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fxjh\" (UniqueName: \"kubernetes.io/projected/b49d6e68-f987-49eb-ad04-41d4ba37798e-kube-api-access-6fxjh\") pod \"b49d6e68-f987-49eb-ad04-41d4ba37798e\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295738 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-swift-storage-0\") pod \"08401bb5-3c30-45b4-bbf1-f963080642ce\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295745 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-logs" (OuterVolumeSpecName: "logs") pod "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a" (UID: "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295771 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-nb\") pod \"08401bb5-3c30-45b4-bbf1-f963080642ce\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295746 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/831a0414-a201-4718-8f29-453a55b575b0-logs" (OuterVolumeSpecName: "logs") pod "831a0414-a201-4718-8f29-453a55b575b0" (UID: "831a0414-a201-4718-8f29-453a55b575b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295799 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-config\") pod \"08401bb5-3c30-45b4-bbf1-f963080642ce\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295831 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn4v6\" (UniqueName: \"kubernetes.io/projected/b34876e9-7919-413e-83b3-6ee30560e822-kube-api-access-wn4v6\") pod \"b34876e9-7919-413e-83b3-6ee30560e822\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295862 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-scripts\") pod \"b34876e9-7919-413e-83b3-6ee30560e822\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295897 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-combined-ca-bundle\") pod \"b34876e9-7919-413e-83b3-6ee30560e822\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295938 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-scripts\") pod \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295965 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xlsg\" (UniqueName: \"kubernetes.io/projected/08401bb5-3c30-45b4-bbf1-f963080642ce-kube-api-access-4xlsg\") pod \"08401bb5-3c30-45b4-bbf1-f963080642ce\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.295961 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49d6e68-f987-49eb-ad04-41d4ba37798e-logs" (OuterVolumeSpecName: "logs") pod "b49d6e68-f987-49eb-ad04-41d4ba37798e" (UID: "b49d6e68-f987-49eb-ad04-41d4ba37798e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296004 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/831a0414-a201-4718-8f29-453a55b575b0-horizon-secret-key\") pod \"831a0414-a201-4718-8f29-453a55b575b0\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296046 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-config-data\") pod \"831a0414-a201-4718-8f29-453a55b575b0\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296074 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49d6e68-f987-49eb-ad04-41d4ba37798e-horizon-secret-key\") pod \"b49d6e68-f987-49eb-ad04-41d4ba37798e\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296103 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-config-data\") pod \"b49d6e68-f987-49eb-ad04-41d4ba37798e\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296139 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-config-data\") pod \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296166 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-sb\") pod \"08401bb5-3c30-45b4-bbf1-f963080642ce\" (UID: \"08401bb5-3c30-45b4-bbf1-f963080642ce\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296205 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-fernet-keys\") pod \"b34876e9-7919-413e-83b3-6ee30560e822\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296238 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-config-data\") pod \"b34876e9-7919-413e-83b3-6ee30560e822\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-credential-keys\") pod \"b34876e9-7919-413e-83b3-6ee30560e822\" (UID: \"b34876e9-7919-413e-83b3-6ee30560e822\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-scripts\") pod \"831a0414-a201-4718-8f29-453a55b575b0\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296883 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jbm\" (UniqueName: \"kubernetes.io/projected/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-kube-api-access-d4jbm\") pod \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\" (UID: \"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296924 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r7t4\" (UniqueName: \"kubernetes.io/projected/831a0414-a201-4718-8f29-453a55b575b0-kube-api-access-2r7t4\") pod \"831a0414-a201-4718-8f29-453a55b575b0\" (UID: \"831a0414-a201-4718-8f29-453a55b575b0\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.296961 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-scripts\") pod \"b49d6e68-f987-49eb-ad04-41d4ba37798e\" (UID: \"b49d6e68-f987-49eb-ad04-41d4ba37798e\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.298170 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.298199 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/831a0414-a201-4718-8f29-453a55b575b0-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.298219 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49d6e68-f987-49eb-ad04-41d4ba37798e-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.299018 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-scripts" (OuterVolumeSpecName: "scripts") pod "831a0414-a201-4718-8f29-453a55b575b0" (UID: "831a0414-a201-4718-8f29-453a55b575b0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.299131 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-config-data" (OuterVolumeSpecName: "config-data") pod "b49d6e68-f987-49eb-ad04-41d4ba37798e" (UID: "b49d6e68-f987-49eb-ad04-41d4ba37798e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.299760 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-scripts" (OuterVolumeSpecName: "scripts") pod "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a" (UID: "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.300422 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-config-data" (OuterVolumeSpecName: "config-data") pod "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a" (UID: "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.301701 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-scripts" (OuterVolumeSpecName: "scripts") pod "b49d6e68-f987-49eb-ad04-41d4ba37798e" (UID: "b49d6e68-f987-49eb-ad04-41d4ba37798e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.302341 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-config-data" (OuterVolumeSpecName: "config-data") pod "831a0414-a201-4718-8f29-453a55b575b0" (UID: "831a0414-a201-4718-8f29-453a55b575b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.303683 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34876e9-7919-413e-83b3-6ee30560e822-kube-api-access-wn4v6" (OuterVolumeSpecName: "kube-api-access-wn4v6") pod "b34876e9-7919-413e-83b3-6ee30560e822" (UID: "b34876e9-7919-413e-83b3-6ee30560e822"). InnerVolumeSpecName "kube-api-access-wn4v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.303773 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a" (UID: "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.304748 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49d6e68-f987-49eb-ad04-41d4ba37798e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b49d6e68-f987-49eb-ad04-41d4ba37798e" (UID: "b49d6e68-f987-49eb-ad04-41d4ba37798e"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.306279 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b34876e9-7919-413e-83b3-6ee30560e822" (UID: "b34876e9-7919-413e-83b3-6ee30560e822"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.307627 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-scripts" (OuterVolumeSpecName: "scripts") pod "b34876e9-7919-413e-83b3-6ee30560e822" (UID: "b34876e9-7919-413e-83b3-6ee30560e822"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.307697 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/831a0414-a201-4718-8f29-453a55b575b0-kube-api-access-2r7t4" (OuterVolumeSpecName: "kube-api-access-2r7t4") pod "831a0414-a201-4718-8f29-453a55b575b0" (UID: "831a0414-a201-4718-8f29-453a55b575b0"). InnerVolumeSpecName "kube-api-access-2r7t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.307752 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/831a0414-a201-4718-8f29-453a55b575b0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "831a0414-a201-4718-8f29-453a55b575b0" (UID: "831a0414-a201-4718-8f29-453a55b575b0"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.308030 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b34876e9-7919-413e-83b3-6ee30560e822" (UID: "b34876e9-7919-413e-83b3-6ee30560e822"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.308703 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49d6e68-f987-49eb-ad04-41d4ba37798e-kube-api-access-6fxjh" (OuterVolumeSpecName: "kube-api-access-6fxjh") pod "b49d6e68-f987-49eb-ad04-41d4ba37798e" (UID: "b49d6e68-f987-49eb-ad04-41d4ba37798e"). InnerVolumeSpecName "kube-api-access-6fxjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.318133 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08401bb5-3c30-45b4-bbf1-f963080642ce-kube-api-access-4xlsg" (OuterVolumeSpecName: "kube-api-access-4xlsg") pod "08401bb5-3c30-45b4-bbf1-f963080642ce" (UID: "08401bb5-3c30-45b4-bbf1-f963080642ce"). InnerVolumeSpecName "kube-api-access-4xlsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.323488 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-kube-api-access-d4jbm" (OuterVolumeSpecName: "kube-api-access-d4jbm") pod "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a" (UID: "d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a"). InnerVolumeSpecName "kube-api-access-d4jbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.331236 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b34876e9-7919-413e-83b3-6ee30560e822" (UID: "b34876e9-7919-413e-83b3-6ee30560e822"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.341373 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-config-data" (OuterVolumeSpecName: "config-data") pod "b34876e9-7919-413e-83b3-6ee30560e822" (UID: "b34876e9-7919-413e-83b3-6ee30560e822"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.345198 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08401bb5-3c30-45b4-bbf1-f963080642ce" (UID: "08401bb5-3c30-45b4-bbf1-f963080642ce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.347738 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "08401bb5-3c30-45b4-bbf1-f963080642ce" (UID: "08401bb5-3c30-45b4-bbf1-f963080642ce"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.361562 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08401bb5-3c30-45b4-bbf1-f963080642ce" (UID: "08401bb5-3c30-45b4-bbf1-f963080642ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.364525 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-config" (OuterVolumeSpecName: "config") pod "08401bb5-3c30-45b4-bbf1-f963080642ce" (UID: "08401bb5-3c30-45b4-bbf1-f963080642ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.374330 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08401bb5-3c30-45b4-bbf1-f963080642ce" (UID: "08401bb5-3c30-45b4-bbf1-f963080642ce"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399445 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399481 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399494 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399508 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399522 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399533 4865 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399544 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399557 4865 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-d4jbm\" (UniqueName: \"kubernetes.io/projected/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-kube-api-access-d4jbm\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399569 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r7t4\" (UniqueName: \"kubernetes.io/projected/831a0414-a201-4718-8f29-453a55b575b0-kube-api-access-2r7t4\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399580 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49d6e68-f987-49eb-ad04-41d4ba37798e-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399591 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399604 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399618 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fxjh\" (UniqueName: \"kubernetes.io/projected/b49d6e68-f987-49eb-ad04-41d4ba37798e-kube-api-access-6fxjh\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399630 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399641 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399652 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08401bb5-3c30-45b4-bbf1-f963080642ce-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399663 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn4v6\" (UniqueName: \"kubernetes.io/projected/b34876e9-7919-413e-83b3-6ee30560e822-kube-api-access-wn4v6\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399674 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399687 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b34876e9-7919-413e-83b3-6ee30560e822-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399700 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399712 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xlsg\" (UniqueName: \"kubernetes.io/projected/08401bb5-3c30-45b4-bbf1-f963080642ce-kube-api-access-4xlsg\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399723 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/831a0414-a201-4718-8f29-453a55b575b0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 
crc kubenswrapper[4865]: I0103 04:35:13.399733 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/831a0414-a201-4718-8f29-453a55b575b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.399745 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49d6e68-f987-49eb-ad04-41d4ba37798e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: E0103 04:35:13.628011 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 03 04:35:13 crc kubenswrapper[4865]: E0103 04:35:13.628337 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2f9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6rcv2_openstack(d16da42a-8750-476c-abdf-8054eca2694a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 04:35:13 crc kubenswrapper[4865]: E0103 04:35:13.629804 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6rcv2" 
podUID="d16da42a-8750-476c-abdf-8054eca2694a" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.681984 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gdpqx" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.704478 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-config-data\") pod \"89004a40-1d1d-46ec-a342-a067fb1eaa54\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.704588 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-combined-ca-bundle\") pod \"89004a40-1d1d-46ec-a342-a067fb1eaa54\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.704762 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-db-sync-config-data\") pod \"89004a40-1d1d-46ec-a342-a067fb1eaa54\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.704854 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjtm\" (UniqueName: \"kubernetes.io/projected/89004a40-1d1d-46ec-a342-a067fb1eaa54-kube-api-access-kcjtm\") pod \"89004a40-1d1d-46ec-a342-a067fb1eaa54\" (UID: \"89004a40-1d1d-46ec-a342-a067fb1eaa54\") " Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.712457 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89004a40-1d1d-46ec-a342-a067fb1eaa54-kube-api-access-kcjtm" (OuterVolumeSpecName: "kube-api-access-kcjtm") pod "89004a40-1d1d-46ec-a342-a067fb1eaa54" (UID: 
"89004a40-1d1d-46ec-a342-a067fb1eaa54"). InnerVolumeSpecName "kube-api-access-kcjtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.714884 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "89004a40-1d1d-46ec-a342-a067fb1eaa54" (UID: "89004a40-1d1d-46ec-a342-a067fb1eaa54"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.761795 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89004a40-1d1d-46ec-a342-a067fb1eaa54" (UID: "89004a40-1d1d-46ec-a342-a067fb1eaa54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.766549 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-config-data" (OuterVolumeSpecName: "config-data") pod "89004a40-1d1d-46ec-a342-a067fb1eaa54" (UID: "89004a40-1d1d-46ec-a342-a067fb1eaa54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.799913 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6487489875-pjsvm" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.799943 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6487489875-pjsvm" event={"ID":"831a0414-a201-4718-8f29-453a55b575b0","Type":"ContainerDied","Data":"c08dd97a0a0d635577bde9915bc7127f470cdcfb95c42fdc6d688401eee7be00"} Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.807210 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.807238 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjtm\" (UniqueName: \"kubernetes.io/projected/89004a40-1d1d-46ec-a342-a067fb1eaa54-kube-api-access-kcjtm\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.807251 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.807264 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89004a40-1d1d-46ec-a342-a067fb1eaa54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.807536 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6rk4f" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.807579 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6rk4f" event={"ID":"b34876e9-7919-413e-83b3-6ee30560e822","Type":"ContainerDied","Data":"dd826edf36b3bc4c9a2c9d2e5e77d807a7773ea3fcac7f9fa8952ddf9e550c4e"} Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.807640 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd826edf36b3bc4c9a2c9d2e5e77d807a7773ea3fcac7f9fa8952ddf9e550c4e" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.810021 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8d4694c7-2sb2k" event={"ID":"d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a","Type":"ContainerDied","Data":"ef2ec4b6ec903180d6336768df4aea4d6dba6d6329175e31026a0a9ac4c232cc"} Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.810092 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d8d4694c7-2sb2k" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.819852 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" event={"ID":"08401bb5-3c30-45b4-bbf1-f963080642ce","Type":"ContainerDied","Data":"80f07919a089b6d1532990c2784fa5f68b04f38001ab7474d71a4a323f79586e"} Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.819899 4865 scope.go:117] "RemoveContainer" containerID="38e6418945aded7a94fafbf95afb6897f5fe03fb61dce7cfa754b469b53ea65b" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.819920 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.827917 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b6bf49cd5-zkq6g" event={"ID":"b49d6e68-f987-49eb-ad04-41d4ba37798e","Type":"ContainerDied","Data":"4d64325a150bc3e947d07601fb22951dbe3b8a8651670dbccf41e885bd8667ef"} Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.827932 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b6bf49cd5-zkq6g" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.836792 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gdpqx" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.837794 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gdpqx" event={"ID":"89004a40-1d1d-46ec-a342-a067fb1eaa54","Type":"ContainerDied","Data":"9646ba50aec3cae9f2c35f0f000e008acc5a4ab0e58296bf126698d2b1b29855"} Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.837826 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9646ba50aec3cae9f2c35f0f000e008acc5a4ab0e58296bf126698d2b1b29855" Jan 03 04:35:13 crc kubenswrapper[4865]: E0103 04:35:13.860407 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6rcv2" podUID="d16da42a-8750-476c-abdf-8054eca2694a" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.860558 4865 scope.go:117] "RemoveContainer" containerID="da0cb86d4a10e2a10491a3d9fa328eb2da58fa88b5858471a84e571a9a4efc8f" Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.894214 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-6487489875-pjsvm"] Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.922906 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6487489875-pjsvm"] Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.988678 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d8d4694c7-2sb2k"] Jan 03 04:35:13 crc kubenswrapper[4865]: I0103 04:35:13.994913 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d8d4694c7-2sb2k"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.000601 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-5sdb5"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.006098 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-5sdb5"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.018217 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b6bf49cd5-zkq6g"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.023945 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b6bf49cd5-zkq6g"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.105663 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc9469fc6-wdk7w"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.136483 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c8ff89456-njqfs"] Jan 03 04:35:14 crc kubenswrapper[4865]: W0103 04:35:14.145439 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod565fcd3f_e73a_446a_b862_717cfb106bd1.slice/crio-b7011320494eaa45b6b1d03269bfab7d918994803a352cfa2825b4e280f70519 WatchSource:0}: Error finding container b7011320494eaa45b6b1d03269bfab7d918994803a352cfa2825b4e280f70519: Status 404 returned error can't find the container with id 
b7011320494eaa45b6b1d03269bfab7d918994803a352cfa2825b4e280f70519 Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.310208 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6rk4f"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.316008 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6rk4f"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.357938 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lqmx2"] Jan 03 04:35:14 crc kubenswrapper[4865]: E0103 04:35:14.358233 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.358251 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" Jan 03 04:35:14 crc kubenswrapper[4865]: E0103 04:35:14.358273 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34876e9-7919-413e-83b3-6ee30560e822" containerName="keystone-bootstrap" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.358279 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34876e9-7919-413e-83b3-6ee30560e822" containerName="keystone-bootstrap" Jan 03 04:35:14 crc kubenswrapper[4865]: E0103 04:35:14.358291 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89004a40-1d1d-46ec-a342-a067fb1eaa54" containerName="glance-db-sync" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.358297 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89004a40-1d1d-46ec-a342-a067fb1eaa54" containerName="glance-db-sync" Jan 03 04:35:14 crc kubenswrapper[4865]: E0103 04:35:14.358308 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="init" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.358313 4865 
state_mem.go:107] "Deleted CPUSet assignment" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="init" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.361322 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89004a40-1d1d-46ec-a342-a067fb1eaa54" containerName="glance-db-sync" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.361344 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.361356 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34876e9-7919-413e-83b3-6ee30560e822" containerName="keystone-bootstrap" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.361878 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.364368 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.364841 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.365207 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.368123 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-82kzt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.370302 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.370844 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqmx2"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.402427 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-8b5c85b87-sp7qt"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.406481 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.429120 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-sp7qt"] Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.437824 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.437870 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.437907 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.437938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-combined-ca-bundle\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " 
pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.437964 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.437985 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-config\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.438000 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-config-data\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.438029 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-scripts\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.438055 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gwg\" (UniqueName: \"kubernetes.io/projected/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-kube-api-access-42gwg\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " 
pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.438081 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6h74\" (UniqueName: \"kubernetes.io/projected/60a739bd-909b-42c4-83a4-6003ebd5e9a6-kube-api-access-p6h74\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.438097 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-fernet-keys\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.438122 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-credential-keys\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.538879 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42gwg\" (UniqueName: \"kubernetes.io/projected/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-kube-api-access-42gwg\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539154 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6h74\" (UniqueName: \"kubernetes.io/projected/60a739bd-909b-42c4-83a4-6003ebd5e9a6-kube-api-access-p6h74\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: 
\"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-fernet-keys\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539194 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-credential-keys\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539239 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539262 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539286 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" 
Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539312 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-combined-ca-bundle\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539337 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539354 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-config\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-config-data\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.539416 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-scripts\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.540132 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.540337 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.540627 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.541064 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-config\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.541553 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.550154 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-combined-ca-bundle\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.551210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-config-data\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.551683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-scripts\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.552948 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-fernet-keys\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.555083 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-credential-keys\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.564002 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6h74\" (UniqueName: \"kubernetes.io/projected/60a739bd-909b-42c4-83a4-6003ebd5e9a6-kube-api-access-p6h74\") pod \"dnsmasq-dns-8b5c85b87-sp7qt\" (UID: 
\"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.576468 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gwg\" (UniqueName: \"kubernetes.io/projected/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-kube-api-access-42gwg\") pod \"keystone-bootstrap-lqmx2\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.677316 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:14 crc kubenswrapper[4865]: I0103 04:35:14.723814 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:14.846743 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerStarted","Data":"57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:14.848861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c8ff89456-njqfs" event={"ID":"565fcd3f-e73a-446a-b862-717cfb106bd1","Type":"ContainerStarted","Data":"b7011320494eaa45b6b1d03269bfab7d918994803a352cfa2825b4e280f70519"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:14.851048 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc9469fc6-wdk7w" event={"ID":"70339b26-8f06-4fe7-821e-cc376084eace","Type":"ContainerStarted","Data":"35674883e25d50aaaa16d502603e87c61c88a491365c770f299e6c23495f4259"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.173540 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" 
path="/var/lib/kubelet/pods/08401bb5-3c30-45b4-bbf1-f963080642ce/volumes" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.175039 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="831a0414-a201-4718-8f29-453a55b575b0" path="/var/lib/kubelet/pods/831a0414-a201-4718-8f29-453a55b575b0/volumes" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.175772 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34876e9-7919-413e-83b3-6ee30560e822" path="/var/lib/kubelet/pods/b34876e9-7919-413e-83b3-6ee30560e822/volumes" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.177799 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49d6e68-f987-49eb-ad04-41d4ba37798e" path="/var/lib/kubelet/pods/b49d6e68-f987-49eb-ad04-41d4ba37798e/volumes" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.178686 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a" path="/var/lib/kubelet/pods/d40bb65d-6cf3-4133-9fd6-1aa59f68ba2a/volumes" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.267284 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.269883 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.272458 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fg978" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.272524 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.272732 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.278228 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.455272 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-scripts\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.455373 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnld\" (UniqueName: \"kubernetes.io/projected/7336705c-df3f-4630-8897-d1ab023d13e6-kube-api-access-zfnld\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.455502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc 
kubenswrapper[4865]: I0103 04:35:15.455569 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.455661 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-logs\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.455681 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.455697 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.525851 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.527987 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.531648 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.546666 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.558950 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.559025 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-logs\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.559046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.559066 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.559136 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-scripts\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.559169 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnld\" (UniqueName: \"kubernetes.io/projected/7336705c-df3f-4630-8897-d1ab023d13e6-kube-api-access-zfnld\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.559216 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.561426 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.561977 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.566889 4865 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.567445 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-logs\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.568344 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-scripts\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.577667 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.592427 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnld\" (UniqueName: \"kubernetes.io/projected/7336705c-df3f-4630-8897-d1ab023d13e6-kube-api-access-zfnld\") pod \"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.612119 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"glance-default-external-api-0\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") " pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.657750 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-5sdb5" podUID="08401bb5-3c30-45b4-bbf1-f963080642ce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.661157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.661239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.661283 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.661317 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-logs\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " 
pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.661475 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.661502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fslmz\" (UniqueName: \"kubernetes.io/projected/b28ae269-05db-4412-b02a-a44bdda93493-kube-api-access-fslmz\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.661528 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.765691 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.765768 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " 
pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.765814 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-logs\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.765917 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.765952 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fslmz\" (UniqueName: \"kubernetes.io/projected/b28ae269-05db-4412-b02a-a44bdda93493-kube-api-access-fslmz\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.765992 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.766098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: 
I0103 04:35:15.766097 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.766821 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-logs\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.766930 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.770680 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.771046 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.776248 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.793807 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.794804 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fslmz\" (UniqueName: \"kubernetes.io/projected/b28ae269-05db-4412-b02a-a44bdda93493-kube-api-access-fslmz\") pod \"glance-default-internal-api-0\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.869604 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:15.891134 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:16.592116 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:16.720703 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.740873 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-sp7qt"] Jan 03 04:35:19 crc kubenswrapper[4865]: W0103 04:35:19.743314 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60a739bd_909b_42c4_83a4_6003ebd5e9a6.slice/crio-1237454bb25bfac7b170f750f92cf43eef35b75ab649df73f578ef5b868cf8a2 WatchSource:0}: Error finding container 1237454bb25bfac7b170f750f92cf43eef35b75ab649df73f578ef5b868cf8a2: Status 404 returned error can't find the container with id 1237454bb25bfac7b170f750f92cf43eef35b75ab649df73f578ef5b868cf8a2 Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.750899 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqmx2"] Jan 03 04:35:19 crc kubenswrapper[4865]: W0103 04:35:19.756373 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7d01ded_38d8_4af2_96ef_ba4a3f290f9c.slice/crio-0323722ed24855cd86c1392183a828908a2e6d4f6c92a329471b17f4f60854ef WatchSource:0}: Error finding container 0323722ed24855cd86c1392183a828908a2e6d4f6c92a329471b17f4f60854ef: Status 404 returned error can't find the container with id 0323722ed24855cd86c1392183a828908a2e6d4f6c92a329471b17f4f60854ef Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.821874 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:35:19 crc 
kubenswrapper[4865]: I0103 04:35:19.906660 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c8ff89456-njqfs" event={"ID":"565fcd3f-e73a-446a-b862-717cfb106bd1","Type":"ContainerStarted","Data":"c9fe6b49912eba3c04c510171a910539ff582a0d66510df02e83f993550ddeb0"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.906716 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c8ff89456-njqfs" event={"ID":"565fcd3f-e73a-446a-b862-717cfb106bd1","Type":"ContainerStarted","Data":"6c06c67c5dcb5cf5a54d804efba744432650290f16cb6454f161914e72c4f510"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.911224 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqmx2" event={"ID":"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c","Type":"ContainerStarted","Data":"0323722ed24855cd86c1392183a828908a2e6d4f6c92a329471b17f4f60854ef"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.912853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" event={"ID":"60a739bd-909b-42c4-83a4-6003ebd5e9a6","Type":"ContainerStarted","Data":"1237454bb25bfac7b170f750f92cf43eef35b75ab649df73f578ef5b868cf8a2"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.914491 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc9469fc6-wdk7w" event={"ID":"70339b26-8f06-4fe7-821e-cc376084eace","Type":"ContainerStarted","Data":"e8c51f3df0fa17191585fbd98de1dd756c10077859e51459e1c8c79c4ab744ad"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.914533 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc9469fc6-wdk7w" event={"ID":"70339b26-8f06-4fe7-821e-cc376084eace","Type":"ContainerStarted","Data":"90c28fa2ba55e8b0146141174ba280b6b86aa0c44d4d082839188cf483036d50"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.916756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"b28ae269-05db-4412-b02a-a44bdda93493","Type":"ContainerStarted","Data":"4092a65a009925ac37081df4b855b027ef8ca5f62ee9eecc0b39ad537d29c556"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.920211 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerStarted","Data":"18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a"} Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.930462 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c8ff89456-njqfs" podStartSLOduration=26.212581409 podStartE2EDuration="30.930441874s" podCreationTimestamp="2026-01-03 04:34:49 +0000 UTC" firstStartedPulling="2026-01-03 04:35:14.148618056 +0000 UTC m=+1141.265671241" lastFinishedPulling="2026-01-03 04:35:18.866478531 +0000 UTC m=+1145.983531706" observedRunningTime="2026-01-03 04:35:19.924093023 +0000 UTC m=+1147.041146228" watchObservedRunningTime="2026-01-03 04:35:19.930441874 +0000 UTC m=+1147.047495079" Jan 03 04:35:19 crc kubenswrapper[4865]: I0103 04:35:19.956557 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cc9469fc6-wdk7w" podStartSLOduration=26.154932977 podStartE2EDuration="30.956535162s" podCreationTimestamp="2026-01-03 04:34:49 +0000 UTC" firstStartedPulling="2026-01-03 04:35:14.101590961 +0000 UTC m=+1141.218644146" lastFinishedPulling="2026-01-03 04:35:18.903193156 +0000 UTC m=+1146.020246331" observedRunningTime="2026-01-03 04:35:19.950712524 +0000 UTC m=+1147.067765709" watchObservedRunningTime="2026-01-03 04:35:19.956535162 +0000 UTC m=+1147.073588357" Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.079876 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.079925 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.185685 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.186013 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.810587 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:35:20 crc kubenswrapper[4865]: W0103 04:35:20.814319 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7336705c_df3f_4630_8897_d1ab023d13e6.slice/crio-ff338bd53bd7b3b19e573eb897c0c802b532a2853e657682a8e87010a59891ab WatchSource:0}: Error finding container ff338bd53bd7b3b19e573eb897c0c802b532a2853e657682a8e87010a59891ab: Status 404 returned error can't find the container with id ff338bd53bd7b3b19e573eb897c0c802b532a2853e657682a8e87010a59891ab Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.933965 4865 generic.go:334] "Generic (PLEG): container finished" podID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerID="d05bf9a32717fd6835cf6faa4f1ee079cfae0a97c0162646eda26516db11571a" exitCode=0 Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.934018 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" event={"ID":"60a739bd-909b-42c4-83a4-6003ebd5e9a6","Type":"ContainerDied","Data":"d05bf9a32717fd6835cf6faa4f1ee079cfae0a97c0162646eda26516db11571a"} Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.938946 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b28ae269-05db-4412-b02a-a44bdda93493","Type":"ContainerStarted","Data":"7513d27a95a0025b11b285d26c311e4a1975af243cc78aaea482d24784b17ae8"} Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.944653 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7336705c-df3f-4630-8897-d1ab023d13e6","Type":"ContainerStarted","Data":"ff338bd53bd7b3b19e573eb897c0c802b532a2853e657682a8e87010a59891ab"} Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.952214 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqmx2" event={"ID":"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c","Type":"ContainerStarted","Data":"4e6c25341a62a4b9b1c6b6570b8897690dc69b8763ae425de8a5b1df557f031a"} Jan 03 04:35:20 crc kubenswrapper[4865]: I0103 04:35:20.990219 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lqmx2" podStartSLOduration=6.990198673 podStartE2EDuration="6.990198673s" podCreationTimestamp="2026-01-03 04:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:20.985675231 +0000 UTC m=+1148.102728416" watchObservedRunningTime="2026-01-03 04:35:20.990198673 +0000 UTC m=+1148.107251858" Jan 03 04:35:21 crc kubenswrapper[4865]: I0103 04:35:21.975783 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" event={"ID":"60a739bd-909b-42c4-83a4-6003ebd5e9a6","Type":"ContainerStarted","Data":"2ec2dd79dfd49370533ce9cb19388f56b241f86846a8fbfdc6c63274c5f97ab4"} Jan 03 04:35:21 crc kubenswrapper[4865]: I0103 04:35:21.979922 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b28ae269-05db-4412-b02a-a44bdda93493","Type":"ContainerStarted","Data":"cda05e5cb542fc3d72d51534f3ae1720215429a902665d685257ff2cbb1191a4"} Jan 03 04:35:21 crc 
kubenswrapper[4865]: I0103 04:35:21.980019 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-log" containerID="cri-o://7513d27a95a0025b11b285d26c311e4a1975af243cc78aaea482d24784b17ae8" gracePeriod=30 Jan 03 04:35:21 crc kubenswrapper[4865]: I0103 04:35:21.980470 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-httpd" containerID="cri-o://cda05e5cb542fc3d72d51534f3ae1720215429a902665d685257ff2cbb1191a4" gracePeriod=30 Jan 03 04:35:22 crc kubenswrapper[4865]: I0103 04:35:22.026231 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.026212119 podStartE2EDuration="8.026212119s" podCreationTimestamp="2026-01-03 04:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:22.017829322 +0000 UTC m=+1149.134882507" watchObservedRunningTime="2026-01-03 04:35:22.026212119 +0000 UTC m=+1149.143265304" Jan 03 04:35:22 crc kubenswrapper[4865]: I0103 04:35:22.988581 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7336705c-df3f-4630-8897-d1ab023d13e6","Type":"ContainerStarted","Data":"eaea71fadcc53fb59c07f6328f96ab42e253fc28ba5fc37f78a0a7d2bc6d4cc1"} Jan 03 04:35:22 crc kubenswrapper[4865]: I0103 04:35:22.990565 4865 generic.go:334] "Generic (PLEG): container finished" podID="b28ae269-05db-4412-b02a-a44bdda93493" containerID="cda05e5cb542fc3d72d51534f3ae1720215429a902665d685257ff2cbb1191a4" exitCode=0 Jan 03 04:35:22 crc kubenswrapper[4865]: I0103 04:35:22.990600 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="b28ae269-05db-4412-b02a-a44bdda93493" containerID="7513d27a95a0025b11b285d26c311e4a1975af243cc78aaea482d24784b17ae8" exitCode=143 Jan 03 04:35:22 crc kubenswrapper[4865]: I0103 04:35:22.990614 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b28ae269-05db-4412-b02a-a44bdda93493","Type":"ContainerDied","Data":"cda05e5cb542fc3d72d51534f3ae1720215429a902665d685257ff2cbb1191a4"} Jan 03 04:35:22 crc kubenswrapper[4865]: I0103 04:35:22.990649 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b28ae269-05db-4412-b02a-a44bdda93493","Type":"ContainerDied","Data":"7513d27a95a0025b11b285d26c311e4a1975af243cc78aaea482d24784b17ae8"} Jan 03 04:35:22 crc kubenswrapper[4865]: I0103 04:35:22.990775 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:23 crc kubenswrapper[4865]: I0103 04:35:23.031718 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" podStartSLOduration=9.031696667 podStartE2EDuration="9.031696667s" podCreationTimestamp="2026-01-03 04:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:23.007070451 +0000 UTC m=+1150.124123636" watchObservedRunningTime="2026-01-03 04:35:23.031696667 +0000 UTC m=+1150.148749862" Jan 03 04:35:23 crc kubenswrapper[4865]: I0103 04:35:23.997756 4865 generic.go:334] "Generic (PLEG): container finished" podID="c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" containerID="4e6c25341a62a4b9b1c6b6570b8897690dc69b8763ae425de8a5b1df557f031a" exitCode=0 Jan 03 04:35:23 crc kubenswrapper[4865]: I0103 04:35:23.998860 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqmx2" 
event={"ID":"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c","Type":"ContainerDied","Data":"4e6c25341a62a4b9b1c6b6570b8897690dc69b8763ae425de8a5b1df557f031a"} Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.859735 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.861827 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.998869 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42gwg\" (UniqueName: \"kubernetes.io/projected/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-kube-api-access-42gwg\") pod \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.998926 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b28ae269-05db-4412-b02a-a44bdda93493\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.998948 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-scripts\") pod \"b28ae269-05db-4412-b02a-a44bdda93493\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.998994 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-config-data\") pod \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999015 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-logs\") pod \"b28ae269-05db-4412-b02a-a44bdda93493\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-config-data\") pod \"b28ae269-05db-4412-b02a-a44bdda93493\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999052 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fslmz\" (UniqueName: \"kubernetes.io/projected/b28ae269-05db-4412-b02a-a44bdda93493-kube-api-access-fslmz\") pod \"b28ae269-05db-4412-b02a-a44bdda93493\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999071 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-combined-ca-bundle\") pod \"b28ae269-05db-4412-b02a-a44bdda93493\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999113 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-fernet-keys\") pod \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999137 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-combined-ca-bundle\") pod \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\" (UID: 
\"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999165 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-credential-keys\") pod \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999197 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-scripts\") pod \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\" (UID: \"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999294 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-httpd-run\") pod \"b28ae269-05db-4412-b02a-a44bdda93493\" (UID: \"b28ae269-05db-4412-b02a-a44bdda93493\") " Jan 03 04:35:25 crc kubenswrapper[4865]: I0103 04:35:25.999840 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-logs" (OuterVolumeSpecName: "logs") pod "b28ae269-05db-4412-b02a-a44bdda93493" (UID: "b28ae269-05db-4412-b02a-a44bdda93493"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:25.999986 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.000273 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b28ae269-05db-4412-b02a-a44bdda93493" (UID: "b28ae269-05db-4412-b02a-a44bdda93493"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.005217 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "b28ae269-05db-4412-b02a-a44bdda93493" (UID: "b28ae269-05db-4412-b02a-a44bdda93493"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.020541 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-scripts" (OuterVolumeSpecName: "scripts") pod "b28ae269-05db-4412-b02a-a44bdda93493" (UID: "b28ae269-05db-4412-b02a-a44bdda93493"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.020624 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-scripts" (OuterVolumeSpecName: "scripts") pod "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" (UID: "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.020658 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28ae269-05db-4412-b02a-a44bdda93493-kube-api-access-fslmz" (OuterVolumeSpecName: "kube-api-access-fslmz") pod "b28ae269-05db-4412-b02a-a44bdda93493" (UID: "b28ae269-05db-4412-b02a-a44bdda93493"). InnerVolumeSpecName "kube-api-access-fslmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.020715 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" (UID: "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.020819 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-kube-api-access-42gwg" (OuterVolumeSpecName: "kube-api-access-42gwg") pod "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" (UID: "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c"). InnerVolumeSpecName "kube-api-access-42gwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.023505 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" (UID: "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.028882 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" (UID: "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.029455 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-config-data" (OuterVolumeSpecName: "config-data") pod "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" (UID: "c7d01ded-38d8-4af2-96ef-ba4a3f290f9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.039110 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b28ae269-05db-4412-b02a-a44bdda93493" (UID: "b28ae269-05db-4412-b02a-a44bdda93493"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.041159 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqmx2" event={"ID":"c7d01ded-38d8-4af2-96ef-ba4a3f290f9c","Type":"ContainerDied","Data":"0323722ed24855cd86c1392183a828908a2e6d4f6c92a329471b17f4f60854ef"} Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.041199 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0323722ed24855cd86c1392183a828908a2e6d4f6c92a329471b17f4f60854ef" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.041274 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqmx2" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.054628 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b28ae269-05db-4412-b02a-a44bdda93493","Type":"ContainerDied","Data":"4092a65a009925ac37081df4b855b027ef8ca5f62ee9eecc0b39ad537d29c556"} Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.054688 4865 scope.go:117] "RemoveContainer" containerID="cda05e5cb542fc3d72d51534f3ae1720215429a902665d685257ff2cbb1191a4" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.054855 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.074522 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-config-data" (OuterVolumeSpecName: "config-data") pod "b28ae269-05db-4412-b02a-a44bdda93493" (UID: "b28ae269-05db-4412-b02a-a44bdda93493"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101813 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b28ae269-05db-4412-b02a-a44bdda93493-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101858 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42gwg\" (UniqueName: \"kubernetes.io/projected/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-kube-api-access-42gwg\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101871 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101900 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101910 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101918 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101927 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fslmz\" (UniqueName: \"kubernetes.io/projected/b28ae269-05db-4412-b02a-a44bdda93493-kube-api-access-fslmz\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101987 4865 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28ae269-05db-4412-b02a-a44bdda93493-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.101997 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.102005 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.102014 4865 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.102022 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.113976 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b8664f56d-q48t7"] Jan 03 04:35:26 crc kubenswrapper[4865]: E0103 04:35:26.114361 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-log" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.114394 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-log" Jan 03 04:35:26 crc kubenswrapper[4865]: E0103 04:35:26.114411 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" containerName="keystone-bootstrap" Jan 03 04:35:26 crc 
kubenswrapper[4865]: I0103 04:35:26.114418 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" containerName="keystone-bootstrap" Jan 03 04:35:26 crc kubenswrapper[4865]: E0103 04:35:26.114444 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-httpd" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.114453 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-httpd" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.114608 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-httpd" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.114622 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28ae269-05db-4412-b02a-a44bdda93493" containerName="glance-log" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.114631 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" containerName="keystone-bootstrap" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.115145 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7b8664f56d-q48t7" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.119910 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.120064 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.120193 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.120300 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.120814 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.120547 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-82kzt" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.134039 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b8664f56d-q48t7"] Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.142806 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.203098 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304164 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-scripts\") pod \"keystone-7b8664f56d-q48t7\" (UID: 
\"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304316 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-combined-ca-bundle\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304371 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-config-data\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304416 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-internal-tls-certs\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304483 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxzzl\" (UniqueName: \"kubernetes.io/projected/d707525c-50ad-4b99-b59c-177bcae86c4c-kube-api-access-qxzzl\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304570 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-public-tls-certs\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304681 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-credential-keys\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.304724 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-fernet-keys\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.396074 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.403093 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.410907 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-config-data\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.411605 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-internal-tls-certs\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.411639 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxzzl\" (UniqueName: \"kubernetes.io/projected/d707525c-50ad-4b99-b59c-177bcae86c4c-kube-api-access-qxzzl\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.411661 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-public-tls-certs\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.411694 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-credential-keys\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.411950 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-fernet-keys\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.412285 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-scripts\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.412538 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-combined-ca-bundle\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.421029 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-config-data\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.421444 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-credential-keys\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.422348 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-scripts\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.423050 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.423818 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-combined-ca-bundle\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.424351 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.426966 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.427125 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.435342 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-fernet-keys\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.440886 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxzzl\" (UniqueName: \"kubernetes.io/projected/d707525c-50ad-4b99-b59c-177bcae86c4c-kube-api-access-qxzzl\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.441591 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-public-tls-certs\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.442479 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.451554 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d707525c-50ad-4b99-b59c-177bcae86c4c-internal-tls-certs\") pod \"keystone-7b8664f56d-q48t7\" (UID: \"d707525c-50ad-4b99-b59c-177bcae86c4c\") " pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.468854 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616038 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616460 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616529 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616561 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616682 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616766 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jbl5\" (UniqueName: \"kubernetes.io/projected/fd307a63-8983-4a93-9c7f-e961c5eb6620-kube-api-access-5jbl5\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616816 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.616837 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718555 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718611 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718658 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jbl5\" (UniqueName: \"kubernetes.io/projected/fd307a63-8983-4a93-9c7f-e961c5eb6620-kube-api-access-5jbl5\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718678 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718753 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.718820 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.719829 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.720053 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.722833 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.723715 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.725558 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.726625 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.728227 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.741958 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jbl5\" (UniqueName: \"kubernetes.io/projected/fd307a63-8983-4a93-9c7f-e961c5eb6620-kube-api-access-5jbl5\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.769690 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:26 crc kubenswrapper[4865]: I0103 04:35:26.795724 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 03 04:35:27 crc kubenswrapper[4865]: I0103 04:35:27.065871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7336705c-df3f-4630-8897-d1ab023d13e6","Type":"ContainerStarted","Data":"12192c405a55b4958eafd333e66c34cf2e3f8b1147e3a5439969af9e094ac667"}
Jan 03 04:35:27 crc kubenswrapper[4865]: I0103 04:35:27.066038 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-log" containerID="cri-o://eaea71fadcc53fb59c07f6328f96ab42e253fc28ba5fc37f78a0a7d2bc6d4cc1" gracePeriod=30
Jan 03 04:35:27 crc kubenswrapper[4865]: I0103 04:35:27.067225 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-httpd" containerID="cri-o://12192c405a55b4958eafd333e66c34cf2e3f8b1147e3a5439969af9e094ac667" gracePeriod=30
Jan 03 04:35:27 crc kubenswrapper[4865]: I0103 04:35:27.089873 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.089852372 podStartE2EDuration="13.089852372s" podCreationTimestamp="2026-01-03 04:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:27.087029926 +0000 UTC m=+1154.204083121" watchObservedRunningTime="2026-01-03 04:35:27.089852372 +0000 UTC m=+1154.206905557"
Jan 03 04:35:27 crc kubenswrapper[4865]: I0103 04:35:27.178527 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28ae269-05db-4412-b02a-a44bdda93493" path="/var/lib/kubelet/pods/b28ae269-05db-4412-b02a-a44bdda93493/volumes"
Jan 03 04:35:27 crc kubenswrapper[4865]: I0103 04:35:27.901455 4865 scope.go:117] "RemoveContainer" containerID="7513d27a95a0025b11b285d26c311e4a1975af243cc78aaea482d24784b17ae8"
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.091631 4865 generic.go:334] "Generic (PLEG): container finished" podID="7336705c-df3f-4630-8897-d1ab023d13e6" containerID="12192c405a55b4958eafd333e66c34cf2e3f8b1147e3a5439969af9e094ac667" exitCode=0
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.091907 4865 generic.go:334] "Generic (PLEG): container finished" podID="7336705c-df3f-4630-8897-d1ab023d13e6" containerID="eaea71fadcc53fb59c07f6328f96ab42e253fc28ba5fc37f78a0a7d2bc6d4cc1" exitCode=143
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.091676 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7336705c-df3f-4630-8897-d1ab023d13e6","Type":"ContainerDied","Data":"12192c405a55b4958eafd333e66c34cf2e3f8b1147e3a5439969af9e094ac667"}
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.091979 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7336705c-df3f-4630-8897-d1ab023d13e6","Type":"ContainerDied","Data":"eaea71fadcc53fb59c07f6328f96ab42e253fc28ba5fc37f78a0a7d2bc6d4cc1"}
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.098338 4865 generic.go:334] "Generic (PLEG): container finished" podID="49719dec-6060-4d9b-ad15-fbeac83d7ab1" containerID="a8151f80e3f0bfd14c0e8015fac2ad0a5ec57c380d318ca41181a991a4fe56bc" exitCode=0
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.098412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fhrp6" event={"ID":"49719dec-6060-4d9b-ad15-fbeac83d7ab1","Type":"ContainerDied","Data":"a8151f80e3f0bfd14c0e8015fac2ad0a5ec57c380d318ca41181a991a4fe56bc"}
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.296473 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.449228 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-scripts\") pod \"7336705c-df3f-4630-8897-d1ab023d13e6\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.449268 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-config-data\") pod \"7336705c-df3f-4630-8897-d1ab023d13e6\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.449303 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-httpd-run\") pod \"7336705c-df3f-4630-8897-d1ab023d13e6\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.449352 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"7336705c-df3f-4630-8897-d1ab023d13e6\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.449473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-logs\") pod \"7336705c-df3f-4630-8897-d1ab023d13e6\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.449552 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-combined-ca-bundle\") pod \"7336705c-df3f-4630-8897-d1ab023d13e6\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.449580 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfnld\" (UniqueName: \"kubernetes.io/projected/7336705c-df3f-4630-8897-d1ab023d13e6-kube-api-access-zfnld\") pod \"7336705c-df3f-4630-8897-d1ab023d13e6\" (UID: \"7336705c-df3f-4630-8897-d1ab023d13e6\") "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.451257 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7336705c-df3f-4630-8897-d1ab023d13e6" (UID: "7336705c-df3f-4630-8897-d1ab023d13e6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.451816 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-logs" (OuterVolumeSpecName: "logs") pod "7336705c-df3f-4630-8897-d1ab023d13e6" (UID: "7336705c-df3f-4630-8897-d1ab023d13e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.455403 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "7336705c-df3f-4630-8897-d1ab023d13e6" (UID: "7336705c-df3f-4630-8897-d1ab023d13e6"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.456827 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7336705c-df3f-4630-8897-d1ab023d13e6-kube-api-access-zfnld" (OuterVolumeSpecName: "kube-api-access-zfnld") pod "7336705c-df3f-4630-8897-d1ab023d13e6" (UID: "7336705c-df3f-4630-8897-d1ab023d13e6"). InnerVolumeSpecName "kube-api-access-zfnld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.457210 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-scripts" (OuterVolumeSpecName: "scripts") pod "7336705c-df3f-4630-8897-d1ab023d13e6" (UID: "7336705c-df3f-4630-8897-d1ab023d13e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.480160 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b8664f56d-q48t7"]
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.481427 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7336705c-df3f-4630-8897-d1ab023d13e6" (UID: "7336705c-df3f-4630-8897-d1ab023d13e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.517927 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-config-data" (OuterVolumeSpecName: "config-data") pod "7336705c-df3f-4630-8897-d1ab023d13e6" (UID: "7336705c-df3f-4630-8897-d1ab023d13e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.552099 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.552128 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.552141 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.552174 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.552186 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7336705c-df3f-4630-8897-d1ab023d13e6-logs\") on node \"crc\" DevicePath \"\""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.552198 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7336705c-df3f-4630-8897-d1ab023d13e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.552213 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfnld\" (UniqueName: \"kubernetes.io/projected/7336705c-df3f-4630-8897-d1ab023d13e6-kube-api-access-zfnld\") on node \"crc\" DevicePath \"\""
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.574146 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.639522 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 03 04:35:28 crc kubenswrapper[4865]: W0103 04:35:28.648646 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd307a63_8983_4a93_9c7f_e961c5eb6620.slice/crio-9b05b59c31c25a9d9b55ecc1676bb384d41a35914f6f63b3b8456d2a97e125f6 WatchSource:0}: Error finding container 9b05b59c31c25a9d9b55ecc1676bb384d41a35914f6f63b3b8456d2a97e125f6: Status 404 returned error can't find the container with id 9b05b59c31c25a9d9b55ecc1676bb384d41a35914f6f63b3b8456d2a97e125f6
Jan 03 04:35:28 crc kubenswrapper[4865]: I0103 04:35:28.653346 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.110550 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerStarted","Data":"c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.112606 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m8g98" event={"ID":"d4902a49-2ac7-4172-9f70-b4b14dfb7d67","Type":"ContainerStarted","Data":"d05cc379d6f12cb9798baadfdd28c0404c0749045f583f31c7748e0c6f3fc4e4"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.115648 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6rcv2" event={"ID":"d16da42a-8750-476c-abdf-8054eca2694a","Type":"ContainerStarted","Data":"d163b9dd6c3d565976bc6cde2e6995b55277335a94fb0d5d3fb7e0adf7050300"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.116968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd307a63-8983-4a93-9c7f-e961c5eb6620","Type":"ContainerStarted","Data":"9b05b59c31c25a9d9b55ecc1676bb384d41a35914f6f63b3b8456d2a97e125f6"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.118036 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ll974" event={"ID":"394d36aa-4f2f-4f5f-a904-1fb372f2de27","Type":"ContainerStarted","Data":"b0289d137493299d875eff99d5805eb8dce604ebf81f0de5dd83d65764fff223"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.120016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7336705c-df3f-4630-8897-d1ab023d13e6","Type":"ContainerDied","Data":"ff338bd53bd7b3b19e573eb897c0c802b532a2853e657682a8e87010a59891ab"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.120039 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.120062 4865 scope.go:117] "RemoveContainer" containerID="12192c405a55b4958eafd333e66c34cf2e3f8b1147e3a5439969af9e094ac667"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.124802 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b8664f56d-q48t7" event={"ID":"d707525c-50ad-4b99-b59c-177bcae86c4c","Type":"ContainerStarted","Data":"ba2946eb077404f2617c379664af9e6a7668daa3dad7a6740704fbe60587671b"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.124849 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b8664f56d-q48t7" event={"ID":"d707525c-50ad-4b99-b59c-177bcae86c4c","Type":"ContainerStarted","Data":"3680d181dc7debee747815db06ceb50d82b1fc58cff4074eb7c407ecb60c86ac"}
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.124955 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7b8664f56d-q48t7"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.130440 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-m8g98" podStartSLOduration=3.610654064 podStartE2EDuration="49.130422142s" podCreationTimestamp="2026-01-03 04:34:40 +0000 UTC" firstStartedPulling="2026-01-03 04:34:42.555781047 +0000 UTC m=+1109.672834232" lastFinishedPulling="2026-01-03 04:35:28.075549115 +0000 UTC m=+1155.192602310" observedRunningTime="2026-01-03 04:35:29.125064946 +0000 UTC m=+1156.242118131" watchObservedRunningTime="2026-01-03 04:35:29.130422142 +0000 UTC m=+1156.247475327"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.163623 4865 scope.go:117] "RemoveContainer" containerID="eaea71fadcc53fb59c07f6328f96ab42e253fc28ba5fc37f78a0a7d2bc6d4cc1"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.166155 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-ll974" podStartSLOduration=5.327832722 podStartE2EDuration="49.166142509s" podCreationTimestamp="2026-01-03 04:34:40 +0000 UTC" firstStartedPulling="2026-01-03 04:34:41.88957941 +0000 UTC m=+1109.006632595" lastFinishedPulling="2026-01-03 04:35:25.727889157 +0000 UTC m=+1152.844942382" observedRunningTime="2026-01-03 04:35:29.148720707 +0000 UTC m=+1156.265773882" watchObservedRunningTime="2026-01-03 04:35:29.166142509 +0000 UTC m=+1156.283195694"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.182117 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b8664f56d-q48t7" podStartSLOduration=3.182099462 podStartE2EDuration="3.182099462s" podCreationTimestamp="2026-01-03 04:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:29.171780951 +0000 UTC m=+1156.288834136" watchObservedRunningTime="2026-01-03 04:35:29.182099462 +0000 UTC m=+1156.299152647"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.192680 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6rcv2" podStartSLOduration=3.070668236 podStartE2EDuration="49.192661567s" podCreationTimestamp="2026-01-03 04:34:40 +0000 UTC" firstStartedPulling="2026-01-03 04:34:41.959165575 +0000 UTC m=+1109.076218760" lastFinishedPulling="2026-01-03 04:35:28.081158906 +0000 UTC m=+1155.198212091" observedRunningTime="2026-01-03 04:35:29.185331079 +0000 UTC m=+1156.302384284" watchObservedRunningTime="2026-01-03 04:35:29.192661567 +0000 UTC m=+1156.309714752"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.725944 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt"
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.806203 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-bd7qz"]
Jan 03 04:35:29 crc kubenswrapper[4865]: I0103 04:35:29.806772 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" podUID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerName="dnsmasq-dns" containerID="cri-o://49f14e0b7d744e734a88eee70f0ba7f4c0ac3f2a3cd4da8e1f4eb38797ff0645" gracePeriod=10
Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.081780 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc9469fc6-wdk7w" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.143044 4865 generic.go:334] "Generic (PLEG): container finished" podID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerID="49f14e0b7d744e734a88eee70f0ba7f4c0ac3f2a3cd4da8e1f4eb38797ff0645" exitCode=0
Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.143112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" event={"ID":"bef3c2c2-d5de-4735-a7bd-dad385be255c","Type":"ContainerDied","Data":"49f14e0b7d744e734a88eee70f0ba7f4c0ac3f2a3cd4da8e1f4eb38797ff0645"}
Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.144357 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd307a63-8983-4a93-9c7f-e961c5eb6620","Type":"ContainerStarted","Data":"d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0"}
Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.187889 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c8ff89456-njqfs" podUID="565fcd3f-e73a-446a-b862-717cfb106bd1" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.280237 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.397960 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjbbb\" (UniqueName: \"kubernetes.io/projected/49719dec-6060-4d9b-ad15-fbeac83d7ab1-kube-api-access-mjbbb\") pod \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.398703 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-config\") pod \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.399070 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-combined-ca-bundle\") pod \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\" (UID: \"49719dec-6060-4d9b-ad15-fbeac83d7ab1\") " Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.403932 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49719dec-6060-4d9b-ad15-fbeac83d7ab1-kube-api-access-mjbbb" (OuterVolumeSpecName: "kube-api-access-mjbbb") pod "49719dec-6060-4d9b-ad15-fbeac83d7ab1" (UID: "49719dec-6060-4d9b-ad15-fbeac83d7ab1"). InnerVolumeSpecName "kube-api-access-mjbbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.423518 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49719dec-6060-4d9b-ad15-fbeac83d7ab1" (UID: "49719dec-6060-4d9b-ad15-fbeac83d7ab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.424983 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-config" (OuterVolumeSpecName: "config") pod "49719dec-6060-4d9b-ad15-fbeac83d7ab1" (UID: "49719dec-6060-4d9b-ad15-fbeac83d7ab1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.501618 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.501666 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49719dec-6060-4d9b-ad15-fbeac83d7ab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:30 crc kubenswrapper[4865]: I0103 04:35:30.501692 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjbbb\" (UniqueName: \"kubernetes.io/projected/49719dec-6060-4d9b-ad15-fbeac83d7ab1-kube-api-access-mjbbb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.022917 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.113953 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-svc\") pod \"bef3c2c2-d5de-4735-a7bd-dad385be255c\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.114013 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-config\") pod \"bef3c2c2-d5de-4735-a7bd-dad385be255c\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.114132 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccvlh\" (UniqueName: \"kubernetes.io/projected/bef3c2c2-d5de-4735-a7bd-dad385be255c-kube-api-access-ccvlh\") pod \"bef3c2c2-d5de-4735-a7bd-dad385be255c\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.114156 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-swift-storage-0\") pod \"bef3c2c2-d5de-4735-a7bd-dad385be255c\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.114198 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-nb\") pod \"bef3c2c2-d5de-4735-a7bd-dad385be255c\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.114229 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-sb\") pod \"bef3c2c2-d5de-4735-a7bd-dad385be255c\" (UID: \"bef3c2c2-d5de-4735-a7bd-dad385be255c\") " Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.126544 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef3c2c2-d5de-4735-a7bd-dad385be255c-kube-api-access-ccvlh" (OuterVolumeSpecName: "kube-api-access-ccvlh") pod "bef3c2c2-d5de-4735-a7bd-dad385be255c" (UID: "bef3c2c2-d5de-4735-a7bd-dad385be255c"). InnerVolumeSpecName "kube-api-access-ccvlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.166023 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.168073 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bef3c2c2-d5de-4735-a7bd-dad385be255c" (UID: "bef3c2c2-d5de-4735-a7bd-dad385be255c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.170197 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bef3c2c2-d5de-4735-a7bd-dad385be255c" (UID: "bef3c2c2-d5de-4735-a7bd-dad385be255c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.172180 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bef3c2c2-d5de-4735-a7bd-dad385be255c" (UID: "bef3c2c2-d5de-4735-a7bd-dad385be255c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.172794 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-config" (OuterVolumeSpecName: "config") pod "bef3c2c2-d5de-4735-a7bd-dad385be255c" (UID: "bef3c2c2-d5de-4735-a7bd-dad385be255c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.178203 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-bd7qz" event={"ID":"bef3c2c2-d5de-4735-a7bd-dad385be255c","Type":"ContainerDied","Data":"959613f4b059f2b0c37704a9c66f97d421e628d3f57668d064cfcb3e2989d8d9"} Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.178256 4865 scope.go:117] "RemoveContainer" containerID="49f14e0b7d744e734a88eee70f0ba7f4c0ac3f2a3cd4da8e1f4eb38797ff0645" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.181054 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bef3c2c2-d5de-4735-a7bd-dad385be255c" (UID: "bef3c2c2-d5de-4735-a7bd-dad385be255c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.182370 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd307a63-8983-4a93-9c7f-e961c5eb6620","Type":"ContainerStarted","Data":"a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8"} Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.187897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fhrp6" event={"ID":"49719dec-6060-4d9b-ad15-fbeac83d7ab1","Type":"ContainerDied","Data":"f0feac30f6c07268753b0022f164d7b7e1a0cf97b7a9c142bfbcdc8979567786"} Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.187925 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0feac30f6c07268753b0022f164d7b7e1a0cf97b7a9c142bfbcdc8979567786" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.187971 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fhrp6" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.208483 4865 scope.go:117] "RemoveContainer" containerID="4b3319169d8de7eafdf73a57254319b68fe242ade82e4595fb24857b16835207" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.215772 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.215758683 podStartE2EDuration="5.215758683s" podCreationTimestamp="2026-01-03 04:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:31.211079767 +0000 UTC m=+1158.328132952" watchObservedRunningTime="2026-01-03 04:35:31.215758683 +0000 UTC m=+1158.332811858" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.216707 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccvlh\" (UniqueName: \"kubernetes.io/projected/bef3c2c2-d5de-4735-a7bd-dad385be255c-kube-api-access-ccvlh\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.216735 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.216746 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.216755 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.216766 4865 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.216775 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef3c2c2-d5de-4735-a7bd-dad385be255c-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.476445 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vtbvd"] Jan 03 04:35:31 crc kubenswrapper[4865]: E0103 04:35:31.477264 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerName="init" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477283 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerName="init" Jan 03 04:35:31 crc kubenswrapper[4865]: E0103 04:35:31.477304 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerName="dnsmasq-dns" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477311 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerName="dnsmasq-dns" Jan 03 04:35:31 crc kubenswrapper[4865]: E0103 04:35:31.477332 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49719dec-6060-4d9b-ad15-fbeac83d7ab1" containerName="neutron-db-sync" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477340 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="49719dec-6060-4d9b-ad15-fbeac83d7ab1" containerName="neutron-db-sync" Jan 03 04:35:31 crc kubenswrapper[4865]: E0103 04:35:31.477348 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-log" Jan 03 04:35:31 crc 
kubenswrapper[4865]: I0103 04:35:31.477355 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-log" Jan 03 04:35:31 crc kubenswrapper[4865]: E0103 04:35:31.477364 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-httpd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477370 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-httpd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477557 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-log" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477589 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" containerName="glance-httpd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477600 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef3c2c2-d5de-4735-a7bd-dad385be255c" containerName="dnsmasq-dns" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.477619 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="49719dec-6060-4d9b-ad15-fbeac83d7ab1" containerName="neutron-db-sync" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.478444 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.522423 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vtbvd"] Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.582919 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66b4884f8d-ksw2z"] Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.585192 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.587879 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wk8cr" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.588569 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.588851 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.589005 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.646490 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-bd7qz"] Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.646677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srw7j\" (UniqueName: \"kubernetes.io/projected/214be05b-cd95-4e66-b46d-5972d6c66c4e-kube-api-access-srw7j\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.646751 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-config\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.646894 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-svc\") pod 
\"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.647030 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.647095 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.647269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.665299 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-bd7qz"] Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.682525 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66b4884f8d-ksw2z"] Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749149 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-combined-ca-bundle\") pod \"neutron-66b4884f8d-ksw2z\" 
(UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749197 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749214 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn26n\" (UniqueName: \"kubernetes.io/projected/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-kube-api-access-kn26n\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749240 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749282 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srw7j\" (UniqueName: \"kubernetes.io/projected/214be05b-cd95-4e66-b46d-5972d6c66c4e-kube-api-access-srw7j\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: 
\"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749372 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-ovndb-tls-certs\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749404 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-config\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749431 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749446 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-config\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.749461 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-httpd-config\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " 
pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.750231 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.750767 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.751294 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.752173 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-config\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.752252 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.767828 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srw7j\" (UniqueName: \"kubernetes.io/projected/214be05b-cd95-4e66-b46d-5972d6c66c4e-kube-api-access-srw7j\") pod \"dnsmasq-dns-84b966f6c9-vtbvd\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.809809 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.850924 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-ovndb-tls-certs\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.850989 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-config\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.851012 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-httpd-config\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.851064 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-combined-ca-bundle\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " 
pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.851115 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn26n\" (UniqueName: \"kubernetes.io/projected/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-kube-api-access-kn26n\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.856919 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-config\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.857600 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-ovndb-tls-certs\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.857924 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-httpd-config\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.861267 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-combined-ca-bundle\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.873657 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn26n\" (UniqueName: \"kubernetes.io/projected/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-kube-api-access-kn26n\") pod \"neutron-66b4884f8d-ksw2z\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:31 crc kubenswrapper[4865]: I0103 04:35:31.952638 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:32 crc kubenswrapper[4865]: I0103 04:35:32.279516 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vtbvd"] Jan 03 04:35:32 crc kubenswrapper[4865]: W0103 04:35:32.284866 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod214be05b_cd95_4e66_b46d_5972d6c66c4e.slice/crio-2dd699ac30f3ec66a876a23e7b8674f0048468737fdb07bd267f92f40bf1d465 WatchSource:0}: Error finding container 2dd699ac30f3ec66a876a23e7b8674f0048468737fdb07bd267f92f40bf1d465: Status 404 returned error can't find the container with id 2dd699ac30f3ec66a876a23e7b8674f0048468737fdb07bd267f92f40bf1d465 Jan 03 04:35:32 crc kubenswrapper[4865]: I0103 04:35:32.555748 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66b4884f8d-ksw2z"] Jan 03 04:35:32 crc kubenswrapper[4865]: W0103 04:35:32.600590 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d48a8a3_a23b_4ebd_862d_c95ec0bf070f.slice/crio-551a10e4d3dab9016546dcfa4dc0f234c010bb8cf75e348f76cdfd432ef0b10a WatchSource:0}: Error finding container 551a10e4d3dab9016546dcfa4dc0f234c010bb8cf75e348f76cdfd432ef0b10a: Status 404 returned error can't find the container with id 551a10e4d3dab9016546dcfa4dc0f234c010bb8cf75e348f76cdfd432ef0b10a Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.183603 4865 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="bef3c2c2-d5de-4735-a7bd-dad385be255c" path="/var/lib/kubelet/pods/bef3c2c2-d5de-4735-a7bd-dad385be255c/volumes" Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.271611 4865 generic.go:334] "Generic (PLEG): container finished" podID="d4902a49-2ac7-4172-9f70-b4b14dfb7d67" containerID="d05cc379d6f12cb9798baadfdd28c0404c0749045f583f31c7748e0c6f3fc4e4" exitCode=0 Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.271709 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m8g98" event={"ID":"d4902a49-2ac7-4172-9f70-b4b14dfb7d67","Type":"ContainerDied","Data":"d05cc379d6f12cb9798baadfdd28c0404c0749045f583f31c7748e0c6f3fc4e4"} Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.280497 4865 generic.go:334] "Generic (PLEG): container finished" podID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerID="2adf7be0e68f6f0f1b46891ec9c38ba34887d0d01d2c2416a1c3304d4a80311b" exitCode=0 Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.280569 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" event={"ID":"214be05b-cd95-4e66-b46d-5972d6c66c4e","Type":"ContainerDied","Data":"2adf7be0e68f6f0f1b46891ec9c38ba34887d0d01d2c2416a1c3304d4a80311b"} Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.280595 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" event={"ID":"214be05b-cd95-4e66-b46d-5972d6c66c4e","Type":"ContainerStarted","Data":"2dd699ac30f3ec66a876a23e7b8674f0048468737fdb07bd267f92f40bf1d465"} Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.282006 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b4884f8d-ksw2z" event={"ID":"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f","Type":"ContainerStarted","Data":"ac07760e3fcc67b9bc34a826fec51c89ba07084899d8c4b104cbc9faf033e78b"} Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.282029 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b4884f8d-ksw2z" event={"ID":"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f","Type":"ContainerStarted","Data":"694b1e16a154f9a6f3c0e0680b8b829ac7548cf807ecae71fedd48daf6040f07"} Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.282039 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b4884f8d-ksw2z" event={"ID":"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f","Type":"ContainerStarted","Data":"551a10e4d3dab9016546dcfa4dc0f234c010bb8cf75e348f76cdfd432ef0b10a"} Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.282714 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.379022 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66b4884f8d-ksw2z" podStartSLOduration=2.379004435 podStartE2EDuration="2.379004435s" podCreationTimestamp="2026-01-03 04:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:33.369874118 +0000 UTC m=+1160.486927303" watchObservedRunningTime="2026-01-03 04:35:33.379004435 +0000 UTC m=+1160.496057620" Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.925945 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b946bd96f-ph9x6"] Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.927709 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.930000 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.930295 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 03 04:35:33 crc kubenswrapper[4865]: I0103 04:35:33.939026 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b946bd96f-ph9x6"] Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.104700 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-combined-ca-bundle\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.104796 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-public-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.105085 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-httpd-config\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.105131 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-ovndb-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.105157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-internal-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.105233 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-config\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.105293 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7tp\" (UniqueName: \"kubernetes.io/projected/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-kube-api-access-2w7tp\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.207070 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-public-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.207214 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-httpd-config\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.207246 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-ovndb-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.207298 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-internal-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.207333 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-config\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.207409 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7tp\" (UniqueName: \"kubernetes.io/projected/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-kube-api-access-2w7tp\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.207484 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-combined-ca-bundle\") pod 
\"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.213319 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-httpd-config\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.213606 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-combined-ca-bundle\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.217602 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-internal-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.218186 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-config\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.218851 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-public-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc 
kubenswrapper[4865]: I0103 04:35:34.220179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-ovndb-tls-certs\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.231625 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7tp\" (UniqueName: \"kubernetes.io/projected/a52305ce-1bb8-4ff4-9d6b-0cf652186e17-kube-api-access-2w7tp\") pod \"neutron-7b946bd96f-ph9x6\" (UID: \"a52305ce-1bb8-4ff4-9d6b-0cf652186e17\") " pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.290452 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.290538 4865 generic.go:334] "Generic (PLEG): container finished" podID="d16da42a-8750-476c-abdf-8054eca2694a" containerID="d163b9dd6c3d565976bc6cde2e6995b55277335a94fb0d5d3fb7e0adf7050300" exitCode=0 Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.290606 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6rcv2" event={"ID":"d16da42a-8750-476c-abdf-8054eca2694a","Type":"ContainerDied","Data":"d163b9dd6c3d565976bc6cde2e6995b55277335a94fb0d5d3fb7e0adf7050300"} Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.292638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" event={"ID":"214be05b-cd95-4e66-b46d-5972d6c66c4e","Type":"ContainerStarted","Data":"32cb3a2b7ffb2ed1f2648473f0d583a651e2b2d35fe09a6c7ca25dfc1fec878c"} Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.292882 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:34 crc 
kubenswrapper[4865]: I0103 04:35:34.339018 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" podStartSLOduration=3.338999872 podStartE2EDuration="3.338999872s" podCreationTimestamp="2026-01-03 04:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:34.331846358 +0000 UTC m=+1161.448899543" watchObservedRunningTime="2026-01-03 04:35:34.338999872 +0000 UTC m=+1161.456053057" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.623574 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m8g98" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.716510 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzhwx\" (UniqueName: \"kubernetes.io/projected/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-kube-api-access-dzhwx\") pod \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.716994 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-combined-ca-bundle\") pod \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.717058 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-config-data\") pod \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.717601 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-scripts\") pod \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.717634 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-logs\") pod \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\" (UID: \"d4902a49-2ac7-4172-9f70-b4b14dfb7d67\") " Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.718315 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-logs" (OuterVolumeSpecName: "logs") pod "d4902a49-2ac7-4172-9f70-b4b14dfb7d67" (UID: "d4902a49-2ac7-4172-9f70-b4b14dfb7d67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.719034 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.722245 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-kube-api-access-dzhwx" (OuterVolumeSpecName: "kube-api-access-dzhwx") pod "d4902a49-2ac7-4172-9f70-b4b14dfb7d67" (UID: "d4902a49-2ac7-4172-9f70-b4b14dfb7d67"). InnerVolumeSpecName "kube-api-access-dzhwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.726882 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-scripts" (OuterVolumeSpecName: "scripts") pod "d4902a49-2ac7-4172-9f70-b4b14dfb7d67" (UID: "d4902a49-2ac7-4172-9f70-b4b14dfb7d67"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.750336 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4902a49-2ac7-4172-9f70-b4b14dfb7d67" (UID: "d4902a49-2ac7-4172-9f70-b4b14dfb7d67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.755330 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-config-data" (OuterVolumeSpecName: "config-data") pod "d4902a49-2ac7-4172-9f70-b4b14dfb7d67" (UID: "d4902a49-2ac7-4172-9f70-b4b14dfb7d67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.821197 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.821551 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.821561 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.821569 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzhwx\" (UniqueName: \"kubernetes.io/projected/d4902a49-2ac7-4172-9f70-b4b14dfb7d67-kube-api-access-dzhwx\") on node \"crc\" DevicePath \"\"" Jan 03 
04:35:34 crc kubenswrapper[4865]: I0103 04:35:34.899247 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b946bd96f-ph9x6"] Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.302374 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m8g98" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.302569 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m8g98" event={"ID":"d4902a49-2ac7-4172-9f70-b4b14dfb7d67","Type":"ContainerDied","Data":"52f49351841b2aa7dc5f24403ae98c6a9cac35a6be71cd7882efffba00ad12d9"} Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.302606 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f49351841b2aa7dc5f24403ae98c6a9cac35a6be71cd7882efffba00ad12d9" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.303989 4865 generic.go:334] "Generic (PLEG): container finished" podID="394d36aa-4f2f-4f5f-a904-1fb372f2de27" containerID="b0289d137493299d875eff99d5805eb8dce604ebf81f0de5dd83d65764fff223" exitCode=0 Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.304112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ll974" event={"ID":"394d36aa-4f2f-4f5f-a904-1fb372f2de27","Type":"ContainerDied","Data":"b0289d137493299d875eff99d5805eb8dce604ebf81f0de5dd83d65764fff223"} Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.469631 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-677444457b-ftr4x"] Jan 03 04:35:35 crc kubenswrapper[4865]: E0103 04:35:35.469959 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4902a49-2ac7-4172-9f70-b4b14dfb7d67" containerName="placement-db-sync" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.469976 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4902a49-2ac7-4172-9f70-b4b14dfb7d67" containerName="placement-db-sync" Jan 03 04:35:35 
crc kubenswrapper[4865]: I0103 04:35:35.470116 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4902a49-2ac7-4172-9f70-b4b14dfb7d67" containerName="placement-db-sync" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.471018 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.479857 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.479857 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.479925 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8q7vz" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.487473 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.487513 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.489993 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-677444457b-ftr4x"] Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.633596 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4m5\" (UniqueName: \"kubernetes.io/projected/f1045fbc-a935-4634-a207-aa8b027c9768-kube-api-access-ll4m5\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.633895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-config-data\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.633931 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-public-tls-certs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.633961 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-scripts\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.633999 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1045fbc-a935-4634-a207-aa8b027c9768-logs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.634026 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-combined-ca-bundle\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.634046 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-internal-tls-certs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.735535 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1045fbc-a935-4634-a207-aa8b027c9768-logs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.735596 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-combined-ca-bundle\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.735624 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-internal-tls-certs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.735698 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4m5\" (UniqueName: \"kubernetes.io/projected/f1045fbc-a935-4634-a207-aa8b027c9768-kube-api-access-ll4m5\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.735716 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-config-data\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.735744 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-public-tls-certs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.735773 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-scripts\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.736465 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1045fbc-a935-4634-a207-aa8b027c9768-logs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.741155 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-public-tls-certs\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.741707 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-internal-tls-certs\") pod \"placement-677444457b-ftr4x\" (UID: 
\"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.741992 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-config-data\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.742197 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-combined-ca-bundle\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.745022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1045fbc-a935-4634-a207-aa8b027c9768-scripts\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.752595 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4m5\" (UniqueName: \"kubernetes.io/projected/f1045fbc-a935-4634-a207-aa8b027c9768-kube-api-access-ll4m5\") pod \"placement-677444457b-ftr4x\" (UID: \"f1045fbc-a935-4634-a207-aa8b027c9768\") " pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:35 crc kubenswrapper[4865]: I0103 04:35:35.797551 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:36 crc kubenswrapper[4865]: I0103 04:35:36.796677 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:36 crc kubenswrapper[4865]: I0103 04:35:36.796738 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:36 crc kubenswrapper[4865]: I0103 04:35:36.845822 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:36 crc kubenswrapper[4865]: I0103 04:35:36.858182 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:37 crc kubenswrapper[4865]: I0103 04:35:37.321661 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:37 crc kubenswrapper[4865]: I0103 04:35:37.321823 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:38 crc kubenswrapper[4865]: I0103 04:35:38.767540 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:35:38 crc kubenswrapper[4865]: I0103 04:35:38.909568 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-db-sync-config-data\") pod \"d16da42a-8750-476c-abdf-8054eca2694a\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " Jan 03 04:35:38 crc kubenswrapper[4865]: I0103 04:35:38.910083 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-combined-ca-bundle\") pod \"d16da42a-8750-476c-abdf-8054eca2694a\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " Jan 03 04:35:38 crc kubenswrapper[4865]: I0103 04:35:38.910146 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2f9m\" (UniqueName: \"kubernetes.io/projected/d16da42a-8750-476c-abdf-8054eca2694a-kube-api-access-x2f9m\") pod \"d16da42a-8750-476c-abdf-8054eca2694a\" (UID: \"d16da42a-8750-476c-abdf-8054eca2694a\") " Jan 03 04:35:38 crc kubenswrapper[4865]: I0103 04:35:38.916856 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d16da42a-8750-476c-abdf-8054eca2694a" (UID: "d16da42a-8750-476c-abdf-8054eca2694a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:38 crc kubenswrapper[4865]: I0103 04:35:38.918022 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16da42a-8750-476c-abdf-8054eca2694a-kube-api-access-x2f9m" (OuterVolumeSpecName: "kube-api-access-x2f9m") pod "d16da42a-8750-476c-abdf-8054eca2694a" (UID: "d16da42a-8750-476c-abdf-8054eca2694a"). 
InnerVolumeSpecName "kube-api-access-x2f9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:38 crc kubenswrapper[4865]: I0103 04:35:38.941515 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d16da42a-8750-476c-abdf-8054eca2694a" (UID: "d16da42a-8750-476c-abdf-8054eca2694a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.012420 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.012458 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16da42a-8750-476c-abdf-8054eca2694a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.012468 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2f9m\" (UniqueName: \"kubernetes.io/projected/d16da42a-8750-476c-abdf-8054eca2694a-kube-api-access-x2f9m\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.168699 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.173402 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.341279 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6rcv2" 
event={"ID":"d16da42a-8750-476c-abdf-8054eca2694a","Type":"ContainerDied","Data":"cc40d0c54b353e32c830148c43cbb00d5f3966112c668367f24f2b13df8ad48c"} Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.341335 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc40d0c54b353e32c830148c43cbb00d5f3966112c668367f24f2b13df8ad48c" Jan 03 04:35:39 crc kubenswrapper[4865]: I0103 04:35:39.341412 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6rcv2" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.097463 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-588ffb9974-8pr57"] Jan 03 04:35:40 crc kubenswrapper[4865]: E0103 04:35:40.098115 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16da42a-8750-476c-abdf-8054eca2694a" containerName="barbican-db-sync" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.098127 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16da42a-8750-476c-abdf-8054eca2694a" containerName="barbican-db-sync" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.098322 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16da42a-8750-476c-abdf-8054eca2694a" containerName="barbican-db-sync" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.099185 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.109638 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b6fdb99ff-s5qqm"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.111029 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.114530 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.116156 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.116287 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.117038 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pghzp" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.140153 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-588ffb9974-8pr57"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.192941 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b6fdb99ff-s5qqm"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.243666 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d53a478c-ba6a-4210-b219-66540ed365c6-logs\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.243781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-config-data-custom\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.244313 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-config-data-custom\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.244392 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td2gv\" (UniqueName: \"kubernetes.io/projected/d53a478c-ba6a-4210-b219-66540ed365c6-kube-api-access-td2gv\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.245134 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-config-data\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.245222 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lgch\" (UniqueName: \"kubernetes.io/projected/e6d49d7a-9faf-486d-a98d-4067f581c56c-kube-api-access-9lgch\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.245280 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d49d7a-9faf-486d-a98d-4067f581c56c-logs\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" 
(UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.245301 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-combined-ca-bundle\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.245323 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-combined-ca-bundle\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.245350 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-config-data\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348614 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-config-data-custom\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348677 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td2gv\" (UniqueName: 
\"kubernetes.io/projected/d53a478c-ba6a-4210-b219-66540ed365c6-kube-api-access-td2gv\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348738 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-config-data\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348766 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lgch\" (UniqueName: \"kubernetes.io/projected/e6d49d7a-9faf-486d-a98d-4067f581c56c-kube-api-access-9lgch\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348791 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d49d7a-9faf-486d-a98d-4067f581c56c-logs\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348806 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-combined-ca-bundle\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348822 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-combined-ca-bundle\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348842 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-config-data\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348887 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d53a478c-ba6a-4210-b219-66540ed365c6-logs\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.348921 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-config-data-custom\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.350771 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d49d7a-9faf-486d-a98d-4067f581c56c-logs\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.359753 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/d53a478c-ba6a-4210-b219-66540ed365c6-logs\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.365022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-config-data-custom\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.370980 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-config-data-custom\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.378030 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-combined-ca-bundle\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.382964 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d49d7a-9faf-486d-a98d-4067f581c56c-config-data\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.386446 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-84b966f6c9-vtbvd"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.386727 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerName="dnsmasq-dns" containerID="cri-o://32cb3a2b7ffb2ed1f2648473f0d583a651e2b2d35fe09a6c7ca25dfc1fec878c" gracePeriod=10 Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.387971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-combined-ca-bundle\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.389219 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.397904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53a478c-ba6a-4210-b219-66540ed365c6-config-data\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.399575 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td2gv\" (UniqueName: \"kubernetes.io/projected/d53a478c-ba6a-4210-b219-66540ed365c6-kube-api-access-td2gv\") pod \"barbican-worker-5b6fdb99ff-s5qqm\" (UID: \"d53a478c-ba6a-4210-b219-66540ed365c6\") " pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.400475 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-smns6"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.401974 4865 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.405285 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b946bd96f-ph9x6" event={"ID":"a52305ce-1bb8-4ff4-9d6b-0cf652186e17","Type":"ContainerStarted","Data":"7b19f8acd46ab2308e0cab3b0b15a83c4bb3e4163340311f449500ff58c7d6e0"} Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.414963 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ll974" event={"ID":"394d36aa-4f2f-4f5f-a904-1fb372f2de27","Type":"ContainerDied","Data":"10345bebc05487832f0a76ce5034622ed24c0fec8372150b4688c25ca1f7b5ed"} Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.415014 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10345bebc05487832f0a76ce5034622ed24c0fec8372150b4688c25ca1f7b5ed" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.426323 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lgch\" (UniqueName: \"kubernetes.io/projected/e6d49d7a-9faf-486d-a98d-4067f581c56c-kube-api-access-9lgch\") pod \"barbican-keystone-listener-588ffb9974-8pr57\" (UID: \"e6d49d7a-9faf-486d-a98d-4067f581c56c\") " pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.429555 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-smns6"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.444662 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56cb66bf5b-6mg8w"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.446477 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.448425 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450574 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450606 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dpz\" (UniqueName: \"kubernetes.io/projected/f1826d83-e283-4366-99b4-dd941cf702d2-kube-api-access-t6dpz\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450634 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450652 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450674 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-config\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450715 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data-custom\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450753 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-combined-ca-bundle\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450771 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450787 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e920a1ff-4b51-4260-948c-2eaa5308498a-logs\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450808 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4d8\" (UniqueName: \"kubernetes.io/projected/e920a1ff-4b51-4260-948c-2eaa5308498a-kube-api-access-8g4d8\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.450832 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.466520 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56cb66bf5b-6mg8w"] Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.491692 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.497660 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ll974" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.553188 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.553591 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6dpz\" (UniqueName: \"kubernetes.io/projected/f1826d83-e283-4366-99b4-dd941cf702d2-kube-api-access-t6dpz\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.553634 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.553658 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.553688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-config\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 
crc kubenswrapper[4865]: I0103 04:35:40.553976 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data-custom\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.554057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-combined-ca-bundle\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.554114 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.554147 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e920a1ff-4b51-4260-948c-2eaa5308498a-logs\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.554201 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g4d8\" (UniqueName: \"kubernetes.io/projected/e920a1ff-4b51-4260-948c-2eaa5308498a-kube-api-access-8g4d8\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 
04:35:40.554242 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.555203 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.558327 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.558563 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.559044 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.563562 4865 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-combined-ca-bundle\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.564204 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-config\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.564230 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e920a1ff-4b51-4260-948c-2eaa5308498a-logs\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.575420 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.577740 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6dpz\" (UniqueName: \"kubernetes.io/projected/f1826d83-e283-4366-99b4-dd941cf702d2-kube-api-access-t6dpz\") pod \"dnsmasq-dns-75c8ddd69c-smns6\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.577849 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g4d8\" (UniqueName: 
\"kubernetes.io/projected/e920a1ff-4b51-4260-948c-2eaa5308498a-kube-api-access-8g4d8\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.578630 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data-custom\") pod \"barbican-api-56cb66bf5b-6mg8w\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.655524 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-config-data\") pod \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.655611 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-scripts\") pod \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.655704 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-combined-ca-bundle\") pod \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.655771 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g45l\" (UniqueName: \"kubernetes.io/projected/394d36aa-4f2f-4f5f-a904-1fb372f2de27-kube-api-access-2g45l\") pod \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\" (UID: 
\"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.655832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-db-sync-config-data\") pod \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.655854 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/394d36aa-4f2f-4f5f-a904-1fb372f2de27-etc-machine-id\") pod \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\" (UID: \"394d36aa-4f2f-4f5f-a904-1fb372f2de27\") " Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.656136 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/394d36aa-4f2f-4f5f-a904-1fb372f2de27-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "394d36aa-4f2f-4f5f-a904-1fb372f2de27" (UID: "394d36aa-4f2f-4f5f-a904-1fb372f2de27"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.656312 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/394d36aa-4f2f-4f5f-a904-1fb372f2de27-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.659601 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-scripts" (OuterVolumeSpecName: "scripts") pod "394d36aa-4f2f-4f5f-a904-1fb372f2de27" (UID: "394d36aa-4f2f-4f5f-a904-1fb372f2de27"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.659630 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/394d36aa-4f2f-4f5f-a904-1fb372f2de27-kube-api-access-2g45l" (OuterVolumeSpecName: "kube-api-access-2g45l") pod "394d36aa-4f2f-4f5f-a904-1fb372f2de27" (UID: "394d36aa-4f2f-4f5f-a904-1fb372f2de27"). InnerVolumeSpecName "kube-api-access-2g45l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.671505 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "394d36aa-4f2f-4f5f-a904-1fb372f2de27" (UID: "394d36aa-4f2f-4f5f-a904-1fb372f2de27"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.692538 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "394d36aa-4f2f-4f5f-a904-1fb372f2de27" (UID: "394d36aa-4f2f-4f5f-a904-1fb372f2de27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.706264 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.735909 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-config-data" (OuterVolumeSpecName: "config-data") pod "394d36aa-4f2f-4f5f-a904-1fb372f2de27" (UID: "394d36aa-4f2f-4f5f-a904-1fb372f2de27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.758251 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.758296 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.758306 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.758314 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394d36aa-4f2f-4f5f-a904-1fb372f2de27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.758324 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g45l\" (UniqueName: \"kubernetes.io/projected/394d36aa-4f2f-4f5f-a904-1fb372f2de27-kube-api-access-2g45l\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.816344 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:40 crc kubenswrapper[4865]: I0103 04:35:40.836325 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.474843 4865 generic.go:334] "Generic (PLEG): container finished" podID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerID="32cb3a2b7ffb2ed1f2648473f0d583a651e2b2d35fe09a6c7ca25dfc1fec878c" exitCode=0 Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.475297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" event={"ID":"214be05b-cd95-4e66-b46d-5972d6c66c4e","Type":"ContainerDied","Data":"32cb3a2b7ffb2ed1f2648473f0d583a651e2b2d35fe09a6c7ca25dfc1fec878c"} Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.475368 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ll974" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.864616 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-smns6"] Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.892798 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:41 crc kubenswrapper[4865]: E0103 04:35:41.893329 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394d36aa-4f2f-4f5f-a904-1fb372f2de27" containerName="cinder-db-sync" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.893566 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="394d36aa-4f2f-4f5f-a904-1fb372f2de27" containerName="cinder-db-sync" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.893822 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="394d36aa-4f2f-4f5f-a904-1fb372f2de27" containerName="cinder-db-sync" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.894785 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.893857 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.900459 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-smtl4" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.900680 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.900985 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.901081 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.906834 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-g5rpq"] Jan 03 04:35:41 crc kubenswrapper[4865]: E0103 04:35:41.914916 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerName="init" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.914991 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerName="init" Jan 03 04:35:41 crc kubenswrapper[4865]: E0103 04:35:41.915054 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerName="dnsmasq-dns" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.915100 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerName="dnsmasq-dns" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.915340 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerName="dnsmasq-dns" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.921583 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.942047 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:41 crc kubenswrapper[4865]: I0103 04:35:41.956534 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-g5rpq"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.094987 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-sb\") pod \"214be05b-cd95-4e66-b46d-5972d6c66c4e\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.095446 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-swift-storage-0\") pod \"214be05b-cd95-4e66-b46d-5972d6c66c4e\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.095533 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-nb\") pod \"214be05b-cd95-4e66-b46d-5972d6c66c4e\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.095589 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srw7j\" (UniqueName: \"kubernetes.io/projected/214be05b-cd95-4e66-b46d-5972d6c66c4e-kube-api-access-srw7j\") pod \"214be05b-cd95-4e66-b46d-5972d6c66c4e\" (UID: 
\"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.095618 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-config\") pod \"214be05b-cd95-4e66-b46d-5972d6c66c4e\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.095697 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-svc\") pod \"214be05b-cd95-4e66-b46d-5972d6c66c4e\" (UID: \"214be05b-cd95-4e66-b46d-5972d6c66c4e\") " Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096109 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096193 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096246 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qlm\" (UniqueName: \"kubernetes.io/projected/0a66951e-08cf-427d-9be0-016513057676-kube-api-access-97qlm\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096288 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-svc\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096730 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a66951e-08cf-427d-9be0-016513057676-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096784 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqrd\" (UniqueName: \"kubernetes.io/projected/5f18b027-fcad-4bf3-87e1-958c0672c8e5-kube-api-access-nhqrd\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096829 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 
04:35:42.096868 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096903 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-config\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096932 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.096960 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.130663 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214be05b-cd95-4e66-b46d-5972d6c66c4e-kube-api-access-srw7j" (OuterVolumeSpecName: "kube-api-access-srw7j") pod "214be05b-cd95-4e66-b46d-5972d6c66c4e" (UID: "214be05b-cd95-4e66-b46d-5972d6c66c4e"). InnerVolumeSpecName "kube-api-access-srw7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.153774 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "214be05b-cd95-4e66-b46d-5972d6c66c4e" (UID: "214be05b-cd95-4e66-b46d-5972d6c66c4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.164704 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-config" (OuterVolumeSpecName: "config") pod "214be05b-cd95-4e66-b46d-5972d6c66c4e" (UID: "214be05b-cd95-4e66-b46d-5972d6c66c4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.180458 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "214be05b-cd95-4e66-b46d-5972d6c66c4e" (UID: "214be05b-cd95-4e66-b46d-5972d6c66c4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.181042 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "214be05b-cd95-4e66-b46d-5972d6c66c4e" (UID: "214be05b-cd95-4e66-b46d-5972d6c66c4e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.216329 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.216557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.216646 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-config\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.216717 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.216797 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.216879 
4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.216984 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217047 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217105 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qlm\" (UniqueName: \"kubernetes.io/projected/0a66951e-08cf-427d-9be0-016513057676-kube-api-access-97qlm\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-svc\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217279 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/0a66951e-08cf-427d-9be0-016513057676-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqrd\" (UniqueName: \"kubernetes.io/projected/5f18b027-fcad-4bf3-87e1-958c0672c8e5-kube-api-access-nhqrd\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217461 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217541 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srw7j\" (UniqueName: \"kubernetes.io/projected/214be05b-cd95-4e66-b46d-5972d6c66c4e-kube-api-access-srw7j\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217599 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217653 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.217703 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:42 crc kubenswrapper[4865]: 
I0103 04:35:42.220799 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.221362 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.221878 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.225394 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.225519 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-config\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.225977 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.233617 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "214be05b-cd95-4e66-b46d-5972d6c66c4e" (UID: "214be05b-cd95-4e66-b46d-5972d6c66c4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.239959 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.242319 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.244064 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a66951e-08cf-427d-9be0-016513057676-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.245413 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqrd\" (UniqueName: \"kubernetes.io/projected/5f18b027-fcad-4bf3-87e1-958c0672c8e5-kube-api-access-nhqrd\") pod 
\"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.246058 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-svc\") pod \"dnsmasq-dns-5784cf869f-g5rpq\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.258818 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qlm\" (UniqueName: \"kubernetes.io/projected/0a66951e-08cf-427d-9be0-016513057676-kube-api-access-97qlm\") pod \"cinder-scheduler-0\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.285196 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.286955 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.289030 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.297537 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.308686 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.323447 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-588ffb9974-8pr57"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.324572 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214be05b-cd95-4e66-b46d-5972d6c66c4e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.359197 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-677444457b-ftr4x"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.359495 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-smns6"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.389446 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b6fdb99ff-s5qqm"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.396028 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56cb66bf5b-6mg8w"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.425544 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-scripts\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.425919 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-logs\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " 
pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.425946 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.426015 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data-custom\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.426052 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtpmr\" (UniqueName: \"kubernetes.io/projected/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-kube-api-access-vtpmr\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.426086 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.426123 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.488818 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" event={"ID":"d53a478c-ba6a-4210-b219-66540ed365c6","Type":"ContainerStarted","Data":"5e868aaa6393a411326c08673f28ddc919b285505fc7458fbf43d0c751a2644f"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.491293 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" event={"ID":"f1826d83-e283-4366-99b4-dd941cf702d2","Type":"ContainerStarted","Data":"bb2fdf371b4dfdd005ec8dbe46503a454a544a377f2d7c150ac79eb9aaae5ffb"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.493028 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56cb66bf5b-6mg8w" event={"ID":"e920a1ff-4b51-4260-948c-2eaa5308498a","Type":"ContainerStarted","Data":"4640ad75ff61ba17e2f22859f8ba1290e2ce8ab9b7d4150e51dceaeae898d5df"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.495948 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerStarted","Data":"6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.496013 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.496016 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="ceilometer-central-agent" containerID="cri-o://57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630" gracePeriod=30 Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.496078 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="sg-core" 
containerID="cri-o://c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26" gracePeriod=30 Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.496114 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="ceilometer-notification-agent" containerID="cri-o://18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a" gracePeriod=30 Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.496041 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="proxy-httpd" containerID="cri-o://6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611" gracePeriod=30 Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.501598 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.501678 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" event={"ID":"214be05b-cd95-4e66-b46d-5972d6c66c4e","Type":"ContainerDied","Data":"2dd699ac30f3ec66a876a23e7b8674f0048468737fdb07bd267f92f40bf1d465"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.501714 4865 scope.go:117] "RemoveContainer" containerID="32cb3a2b7ffb2ed1f2648473f0d583a651e2b2d35fe09a6c7ca25dfc1fec878c" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.517611 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" event={"ID":"e6d49d7a-9faf-486d-a98d-4067f581c56c","Type":"ContainerStarted","Data":"21de548077b2bc6d98b7106b4b8c489392867ad8f4d588b22425eb7651244417"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.520069 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.527143 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.527191 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-scripts\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.527227 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-logs\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.527247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.527297 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data-custom\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.527321 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtpmr\" 
(UniqueName: \"kubernetes.io/projected/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-kube-api-access-vtpmr\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.527355 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.528332 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.529646 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b946bd96f-ph9x6" event={"ID":"a52305ce-1bb8-4ff4-9d6b-0cf652186e17","Type":"ContainerStarted","Data":"e2a5f96b2e180ac44b97e04b80af3ce72f6b6399946e0e9ed09d520e172704c9"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.530598 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-logs\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.536460 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.920106135 podStartE2EDuration="1m2.536447473s" podCreationTimestamp="2026-01-03 04:34:40 +0000 UTC" firstStartedPulling="2026-01-03 04:34:41.890960947 +0000 UTC m=+1109.008014132" lastFinishedPulling="2026-01-03 04:35:41.507302285 +0000 UTC m=+1168.624355470" 
observedRunningTime="2026-01-03 04:35:42.516501483 +0000 UTC m=+1169.633554678" watchObservedRunningTime="2026-01-03 04:35:42.536447473 +0000 UTC m=+1169.653500658" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.540977 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.556338 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-677444457b-ftr4x" event={"ID":"f1045fbc-a935-4634-a207-aa8b027c9768","Type":"ContainerStarted","Data":"9e7d12e5163e42b70fe449f853040013a60ee18dda3797cb1e59f4935a8b3cff"} Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.556418 4865 scope.go:117] "RemoveContainer" containerID="2adf7be0e68f6f0f1b46891ec9c38ba34887d0d01d2c2416a1c3304d4a80311b" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.557665 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data-custom\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.562081 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-scripts\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.569423 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vtbvd"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.573866 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.576260 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-vtbvd"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.588727 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtpmr\" (UniqueName: \"kubernetes.io/projected/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-kube-api-access-vtpmr\") pod \"cinder-api-0\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.632211 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.699103 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.834008 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-g5rpq"] Jan 03 04:35:42 crc kubenswrapper[4865]: I0103 04:35:42.837285 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.072786 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:43 crc kubenswrapper[4865]: W0103 04:35:43.198180 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97e54ce5_3373_4e17_ad57_63f10ed6b1fe.slice/crio-6babdd760833b9f21a93b69be3d429676f4e7ecc3176abe516f755f0633c9304 WatchSource:0}: Error finding container 
6babdd760833b9f21a93b69be3d429676f4e7ecc3176abe516f755f0633c9304: Status 404 returned error can't find the container with id 6babdd760833b9f21a93b69be3d429676f4e7ecc3176abe516f755f0633c9304 Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.206363 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" path="/var/lib/kubelet/pods/214be05b-cd95-4e66-b46d-5972d6c66c4e/volumes" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.206947 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.565693 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56cb66bf5b-6mg8w" event={"ID":"e920a1ff-4b51-4260-948c-2eaa5308498a","Type":"ContainerStarted","Data":"8a2751f0523a99ced43c0f54dd621c55679ffeeabfe6f783a357f125a6d0923f"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.565756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56cb66bf5b-6mg8w" event={"ID":"e920a1ff-4b51-4260-948c-2eaa5308498a","Type":"ContainerStarted","Data":"43ec9ab91bafd73083557b98586e5bc4367c506e25f910505795aab4d4ae075e"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.566801 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.566841 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.569803 4865 generic.go:334] "Generic (PLEG): container finished" podID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerID="6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611" exitCode=0 Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.569824 4865 generic.go:334] "Generic (PLEG): container finished" podID="ec473391-48f3-447b-bcd5-bbee75aa85a4" 
containerID="c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26" exitCode=2 Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.569831 4865 generic.go:334] "Generic (PLEG): container finished" podID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerID="57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630" exitCode=0 Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.569858 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerDied","Data":"6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.569873 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerDied","Data":"c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.569883 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerDied","Data":"57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.571088 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97e54ce5-3373-4e17-ad57-63f10ed6b1fe","Type":"ContainerStarted","Data":"6babdd760833b9f21a93b69be3d429676f4e7ecc3176abe516f755f0633c9304"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.573188 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-677444457b-ftr4x" event={"ID":"f1045fbc-a935-4634-a207-aa8b027c9768","Type":"ContainerStarted","Data":"6612135e783e7127d4aa68edc36c784ce8a6733a7eea34d243aa6d22596e60fd"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.573214 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-677444457b-ftr4x" event={"ID":"f1045fbc-a935-4634-a207-aa8b027c9768","Type":"ContainerStarted","Data":"d09156d90ee431007b2f4a8a8ee8c92380b7420f33f19e4e2715ab7ee3492f5c"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.573645 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.573671 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-677444457b-ftr4x" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.576713 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a66951e-08cf-427d-9be0-016513057676","Type":"ContainerStarted","Data":"824ae6bece0810c62295628968ce2b6a15f65e73650c05fc5ed0c0444f7f0acd"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.580647 4865 generic.go:334] "Generic (PLEG): container finished" podID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerID="37cf9cce850081fb582dd3e5eed295d0b1eeecda7e2fe4b52399c1ddf8514a14" exitCode=0 Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.580687 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" event={"ID":"5f18b027-fcad-4bf3-87e1-958c0672c8e5","Type":"ContainerDied","Data":"37cf9cce850081fb582dd3e5eed295d0b1eeecda7e2fe4b52399c1ddf8514a14"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.580702 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" event={"ID":"5f18b027-fcad-4bf3-87e1-958c0672c8e5","Type":"ContainerStarted","Data":"292a2697f5211ff0f26bfb44c76a948b7d0e86bc60b294b2bcf47e0ea986f1c0"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.582298 4865 generic.go:334] "Generic (PLEG): container finished" podID="f1826d83-e283-4366-99b4-dd941cf702d2" containerID="cf078c96445f65dca36f43334dd71c658db881b52daf3bbdf517215f7bccb2bd" exitCode=0 Jan 03 
04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.582337 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" event={"ID":"f1826d83-e283-4366-99b4-dd941cf702d2","Type":"ContainerDied","Data":"cf078c96445f65dca36f43334dd71c658db881b52daf3bbdf517215f7bccb2bd"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.598139 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56cb66bf5b-6mg8w" podStartSLOduration=3.5981197590000003 podStartE2EDuration="3.598119759s" podCreationTimestamp="2026-01-03 04:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:43.583479922 +0000 UTC m=+1170.700533117" watchObservedRunningTime="2026-01-03 04:35:43.598119759 +0000 UTC m=+1170.715172944" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.609890 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b946bd96f-ph9x6" event={"ID":"a52305ce-1bb8-4ff4-9d6b-0cf652186e17","Type":"ContainerStarted","Data":"f2fd7fbda77cdd4745e8e55f1acb0e6e6f382b41c0b33b437be74fc0802acf2e"} Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.644411 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-677444457b-ftr4x" podStartSLOduration=8.644367519 podStartE2EDuration="8.644367519s" podCreationTimestamp="2026-01-03 04:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:43.63887088 +0000 UTC m=+1170.755924075" watchObservedRunningTime="2026-01-03 04:35:43.644367519 +0000 UTC m=+1170.761420704" Jan 03 04:35:43 crc kubenswrapper[4865]: I0103 04:35:43.690108 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b946bd96f-ph9x6" podStartSLOduration=10.690091355 
podStartE2EDuration="10.690091355s" podCreationTimestamp="2026-01-03 04:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:43.685887751 +0000 UTC m=+1170.802940936" watchObservedRunningTime="2026-01-03 04:35:43.690091355 +0000 UTC m=+1170.807144540" Jan 03 04:35:44 crc kubenswrapper[4865]: I0103 04:35:44.602109 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5c8ff89456-njqfs" Jan 03 04:35:44 crc kubenswrapper[4865]: I0103 04:35:44.618263 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:35:44 crc kubenswrapper[4865]: I0103 04:35:44.619260 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:35:44 crc kubenswrapper[4865]: I0103 04:35:44.721717 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cc9469fc6-wdk7w"] Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.304991 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.311012 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.397037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-config\") pod \"f1826d83-e283-4366-99b4-dd941cf702d2\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.397178 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-swift-storage-0\") pod \"f1826d83-e283-4366-99b4-dd941cf702d2\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.397209 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-svc\") pod \"f1826d83-e283-4366-99b4-dd941cf702d2\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.397306 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-nb\") pod \"f1826d83-e283-4366-99b4-dd941cf702d2\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.397359 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-sb\") pod \"f1826d83-e283-4366-99b4-dd941cf702d2\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " Jan 03 04:35:45 crc 
kubenswrapper[4865]: I0103 04:35:45.397436 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6dpz\" (UniqueName: \"kubernetes.io/projected/f1826d83-e283-4366-99b4-dd941cf702d2-kube-api-access-t6dpz\") pod \"f1826d83-e283-4366-99b4-dd941cf702d2\" (UID: \"f1826d83-e283-4366-99b4-dd941cf702d2\") " Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.421153 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1826d83-e283-4366-99b4-dd941cf702d2-kube-api-access-t6dpz" (OuterVolumeSpecName: "kube-api-access-t6dpz") pod "f1826d83-e283-4366-99b4-dd941cf702d2" (UID: "f1826d83-e283-4366-99b4-dd941cf702d2"). InnerVolumeSpecName "kube-api-access-t6dpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.423283 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1826d83-e283-4366-99b4-dd941cf702d2" (UID: "f1826d83-e283-4366-99b4-dd941cf702d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.426512 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1826d83-e283-4366-99b4-dd941cf702d2" (UID: "f1826d83-e283-4366-99b4-dd941cf702d2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.431271 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1826d83-e283-4366-99b4-dd941cf702d2" (UID: "f1826d83-e283-4366-99b4-dd941cf702d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.431922 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-config" (OuterVolumeSpecName: "config") pod "f1826d83-e283-4366-99b4-dd941cf702d2" (UID: "f1826d83-e283-4366-99b4-dd941cf702d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.444715 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1826d83-e283-4366-99b4-dd941cf702d2" (UID: "f1826d83-e283-4366-99b4-dd941cf702d2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.499497 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.499532 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6dpz\" (UniqueName: \"kubernetes.io/projected/f1826d83-e283-4366-99b4-dd941cf702d2-kube-api-access-t6dpz\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.499548 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.499561 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.499575 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.499588 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1826d83-e283-4366-99b4-dd941cf702d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.629351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97e54ce5-3373-4e17-ad57-63f10ed6b1fe","Type":"ContainerStarted","Data":"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec"} Jan 03 04:35:45 crc kubenswrapper[4865]: 
I0103 04:35:45.630653 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" event={"ID":"5f18b027-fcad-4bf3-87e1-958c0672c8e5","Type":"ContainerStarted","Data":"8f740d6baa5c00348af3a7a28850b7a37ae48226ec0a675bef54db9b35bcf811"} Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.630963 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.632188 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.632434 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-smns6" event={"ID":"f1826d83-e283-4366-99b4-dd941cf702d2","Type":"ContainerDied","Data":"bb2fdf371b4dfdd005ec8dbe46503a454a544a377f2d7c150ac79eb9aaae5ffb"} Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.644196 4865 scope.go:117] "RemoveContainer" containerID="cf078c96445f65dca36f43334dd71c658db881b52daf3bbdf517215f7bccb2bd" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.644625 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cc9469fc6-wdk7w" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon-log" containerID="cri-o://90c28fa2ba55e8b0146141174ba280b6b86aa0c44d4d082839188cf483036d50" gracePeriod=30 Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.644794 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cc9469fc6-wdk7w" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" containerID="cri-o://e8c51f3df0fa17191585fbd98de1dd756c10077859e51459e1c8c79c4ab744ad" gracePeriod=30 Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.669157 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" podStartSLOduration=4.669139037 podStartE2EDuration="4.669139037s" podCreationTimestamp="2026-01-03 04:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:45.668998133 +0000 UTC m=+1172.786051328" watchObservedRunningTime="2026-01-03 04:35:45.669139037 +0000 UTC m=+1172.786192222" Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.711808 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-smns6"] Jan 03 04:35:45 crc kubenswrapper[4865]: I0103 04:35:45.720635 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-smns6"] Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.649001 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" event={"ID":"e6d49d7a-9faf-486d-a98d-4067f581c56c","Type":"ContainerStarted","Data":"26006b30bbd8a8fac2e1d0536ecade653a6feba1d1defbc58da42fd3b7ffe9f1"} Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.653828 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" event={"ID":"d53a478c-ba6a-4210-b219-66540ed365c6","Type":"ContainerStarted","Data":"3fa511168b603468f01f0c6a4c6ecc1653a1e8097693eff4c72e0fae5dc84bf9"} Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.653882 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" event={"ID":"d53a478c-ba6a-4210-b219-66540ed365c6","Type":"ContainerStarted","Data":"329540db64f9055e612cc940ce09d3ad8c630ece35acc9702d1473d25b9ad420"} Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.686265 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b6fdb99ff-s5qqm" podStartSLOduration=3.468254518 podStartE2EDuration="6.686245768s" 
podCreationTimestamp="2026-01-03 04:35:40 +0000 UTC" firstStartedPulling="2026-01-03 04:35:42.394873996 +0000 UTC m=+1169.511927181" lastFinishedPulling="2026-01-03 04:35:45.612865236 +0000 UTC m=+1172.729918431" observedRunningTime="2026-01-03 04:35:46.685243732 +0000 UTC m=+1173.802296917" watchObservedRunningTime="2026-01-03 04:35:46.686245768 +0000 UTC m=+1173.803298953" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.808869 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5974c4f958-8d8f6"] Jan 03 04:35:46 crc kubenswrapper[4865]: E0103 04:35:46.809547 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1826d83-e283-4366-99b4-dd941cf702d2" containerName="init" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.809563 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1826d83-e283-4366-99b4-dd941cf702d2" containerName="init" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.809748 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1826d83-e283-4366-99b4-dd941cf702d2" containerName="init" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.810719 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.815731 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.815947 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.832413 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-vtbvd" podUID="214be05b-cd95-4e66-b46d-5972d6c66c4e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.837916 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5974c4f958-8d8f6"] Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.932355 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-config-data\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.932612 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-logs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.932763 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-combined-ca-bundle\") pod \"barbican-api-5974c4f958-8d8f6\" 
(UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.932866 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-public-tls-certs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.932987 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kp5p\" (UniqueName: \"kubernetes.io/projected/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-kube-api-access-7kp5p\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.933085 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-config-data-custom\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:46 crc kubenswrapper[4865]: I0103 04:35:46.933223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-internal-tls-certs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.034560 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-config-data\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.034619 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-logs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.034679 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-combined-ca-bundle\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.034703 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-public-tls-certs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.034738 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kp5p\" (UniqueName: \"kubernetes.io/projected/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-kube-api-access-7kp5p\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.034754 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-config-data-custom\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.034784 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-internal-tls-certs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.035624 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-logs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.041408 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-public-tls-certs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.043785 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-internal-tls-certs\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.044226 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-combined-ca-bundle\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.044674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-config-data\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.045782 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-config-data-custom\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.053885 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kp5p\" (UniqueName: \"kubernetes.io/projected/9890e74a-cb62-411d-8cf0-ce88ffcc73e0-kube-api-access-7kp5p\") pod \"barbican-api-5974c4f958-8d8f6\" (UID: \"9890e74a-cb62-411d-8cf0-ce88ffcc73e0\") " pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.171272 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1826d83-e283-4366-99b4-dd941cf702d2" path="/var/lib/kubelet/pods/f1826d83-e283-4366-99b4-dd941cf702d2/volumes" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.182508 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.520687 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.652622 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-log-httpd\") pod \"ec473391-48f3-447b-bcd5-bbee75aa85a4\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.652746 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-config-data\") pod \"ec473391-48f3-447b-bcd5-bbee75aa85a4\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.652771 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-sg-core-conf-yaml\") pod \"ec473391-48f3-447b-bcd5-bbee75aa85a4\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.652832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-scripts\") pod \"ec473391-48f3-447b-bcd5-bbee75aa85a4\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.652891 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzcfp\" (UniqueName: \"kubernetes.io/projected/ec473391-48f3-447b-bcd5-bbee75aa85a4-kube-api-access-lzcfp\") pod \"ec473391-48f3-447b-bcd5-bbee75aa85a4\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.652919 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-combined-ca-bundle\") pod \"ec473391-48f3-447b-bcd5-bbee75aa85a4\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.652976 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-run-httpd\") pod \"ec473391-48f3-447b-bcd5-bbee75aa85a4\" (UID: \"ec473391-48f3-447b-bcd5-bbee75aa85a4\") " Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.653447 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec473391-48f3-447b-bcd5-bbee75aa85a4" (UID: "ec473391-48f3-447b-bcd5-bbee75aa85a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.653907 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec473391-48f3-447b-bcd5-bbee75aa85a4" (UID: "ec473391-48f3-447b-bcd5-bbee75aa85a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.660335 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-scripts" (OuterVolumeSpecName: "scripts") pod "ec473391-48f3-447b-bcd5-bbee75aa85a4" (UID: "ec473391-48f3-447b-bcd5-bbee75aa85a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.660421 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec473391-48f3-447b-bcd5-bbee75aa85a4-kube-api-access-lzcfp" (OuterVolumeSpecName: "kube-api-access-lzcfp") pod "ec473391-48f3-447b-bcd5-bbee75aa85a4" (UID: "ec473391-48f3-447b-bcd5-bbee75aa85a4"). InnerVolumeSpecName "kube-api-access-lzcfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.671640 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5974c4f958-8d8f6"] Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.675512 4865 generic.go:334] "Generic (PLEG): container finished" podID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerID="18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a" exitCode=0 Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.675561 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerDied","Data":"18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a"} Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.675583 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec473391-48f3-447b-bcd5-bbee75aa85a4","Type":"ContainerDied","Data":"c47cffdeea4c6d344246b555e2799fce55205d4b8d9c4cdc90129b840b544823"} Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.675599 4865 scope.go:117] "RemoveContainer" containerID="6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.675709 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.679154 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" event={"ID":"e6d49d7a-9faf-486d-a98d-4067f581c56c","Type":"ContainerStarted","Data":"60dcce7325b0f1738d0de2e1d401dbaa2a674a34b6e5f36a4b85fd9a5fe0330a"} Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.681623 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec473391-48f3-447b-bcd5-bbee75aa85a4" (UID: "ec473391-48f3-447b-bcd5-bbee75aa85a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.687896 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97e54ce5-3373-4e17-ad57-63f10ed6b1fe","Type":"ContainerStarted","Data":"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb"} Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.688105 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api-log" containerID="cri-o://3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec" gracePeriod=30 Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.688419 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api" containerID="cri-o://3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb" gracePeriod=30 Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.693647 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"0a66951e-08cf-427d-9be0-016513057676","Type":"ContainerStarted","Data":"c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239"} Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.693736 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a66951e-08cf-427d-9be0-016513057676","Type":"ContainerStarted","Data":"c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7"} Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.706398 4865 scope.go:117] "RemoveContainer" containerID="c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.716339 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-588ffb9974-8pr57" podStartSLOduration=4.281326296 podStartE2EDuration="7.716324921s" podCreationTimestamp="2026-01-03 04:35:40 +0000 UTC" firstStartedPulling="2026-01-03 04:35:42.252889358 +0000 UTC m=+1169.369942543" lastFinishedPulling="2026-01-03 04:35:45.687887983 +0000 UTC m=+1172.804941168" observedRunningTime="2026-01-03 04:35:47.701395118 +0000 UTC m=+1174.818448323" watchObservedRunningTime="2026-01-03 04:35:47.716324921 +0000 UTC m=+1174.833378106" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.740264 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.740243867 podStartE2EDuration="5.740243867s" podCreationTimestamp="2026-01-03 04:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:47.726660891 +0000 UTC m=+1174.843714076" watchObservedRunningTime="2026-01-03 04:35:47.740243867 +0000 UTC m=+1174.857297062" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.755099 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.755122 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.755132 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzcfp\" (UniqueName: \"kubernetes.io/projected/ec473391-48f3-447b-bcd5-bbee75aa85a4-kube-api-access-lzcfp\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.755141 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.755149 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec473391-48f3-447b-bcd5-bbee75aa85a4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.756492 4865 scope.go:117] "RemoveContainer" containerID="18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.758448 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.23304384 podStartE2EDuration="6.758433339s" podCreationTimestamp="2026-01-03 04:35:41 +0000 UTC" firstStartedPulling="2026-01-03 04:35:43.086517791 +0000 UTC m=+1170.203570966" lastFinishedPulling="2026-01-03 04:35:45.61190728 +0000 UTC m=+1172.728960465" observedRunningTime="2026-01-03 04:35:47.748541612 +0000 UTC m=+1174.865594797" watchObservedRunningTime="2026-01-03 04:35:47.758433339 +0000 UTC m=+1174.875486524" Jan 03 04:35:47 crc kubenswrapper[4865]: 
I0103 04:35:47.793213 4865 scope.go:117] "RemoveContainer" containerID="57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.804555 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec473391-48f3-447b-bcd5-bbee75aa85a4" (UID: "ec473391-48f3-447b-bcd5-bbee75aa85a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.807285 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-config-data" (OuterVolumeSpecName: "config-data") pod "ec473391-48f3-447b-bcd5-bbee75aa85a4" (UID: "ec473391-48f3-447b-bcd5-bbee75aa85a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.841495 4865 scope.go:117] "RemoveContainer" containerID="6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611" Jan 03 04:35:47 crc kubenswrapper[4865]: E0103 04:35:47.841901 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611\": container with ID starting with 6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611 not found: ID does not exist" containerID="6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.841938 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611"} err="failed to get container status \"6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611\": rpc 
error: code = NotFound desc = could not find container \"6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611\": container with ID starting with 6e0c838e7ae2de58c950d7fa4d1fad76010d77725348b2e246083f2eea380611 not found: ID does not exist" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.841982 4865 scope.go:117] "RemoveContainer" containerID="c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26" Jan 03 04:35:47 crc kubenswrapper[4865]: E0103 04:35:47.842275 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26\": container with ID starting with c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26 not found: ID does not exist" containerID="c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.842304 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26"} err="failed to get container status \"c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26\": rpc error: code = NotFound desc = could not find container \"c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26\": container with ID starting with c18866be2cbdaac9e456894351da2f61cd86074ed25cdbc8ef9cb8a7f673ef26 not found: ID does not exist" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.842325 4865 scope.go:117] "RemoveContainer" containerID="18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a" Jan 03 04:35:47 crc kubenswrapper[4865]: E0103 04:35:47.842559 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a\": container with ID starting with 
18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a not found: ID does not exist" containerID="18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.842593 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a"} err="failed to get container status \"18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a\": rpc error: code = NotFound desc = could not find container \"18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a\": container with ID starting with 18bf85d9b3dff38f954a18c04520875cf952099e26a61af56e59c6f1577fa12a not found: ID does not exist" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.842620 4865 scope.go:117] "RemoveContainer" containerID="57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630" Jan 03 04:35:47 crc kubenswrapper[4865]: E0103 04:35:47.842861 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630\": container with ID starting with 57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630 not found: ID does not exist" containerID="57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.842934 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630"} err="failed to get container status \"57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630\": rpc error: code = NotFound desc = could not find container \"57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630\": container with ID starting with 57da9ec468b35e68807bd947ff3a238030c492730e1787e9949808fef3e9b630 not found: ID does not 
exist" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.860297 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:47 crc kubenswrapper[4865]: I0103 04:35:47.860316 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec473391-48f3-447b-bcd5-bbee75aa85a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.022354 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.036432 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.064427 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.064819 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="proxy-httpd" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.064836 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="proxy-httpd" Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.064864 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="ceilometer-central-agent" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.064871 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="ceilometer-central-agent" Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.064885 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" 
containerName="ceilometer-notification-agent" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.064891 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="ceilometer-notification-agent" Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.064901 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="sg-core" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.064906 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="sg-core" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.065058 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="sg-core" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.065077 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="ceilometer-notification-agent" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.065095 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="proxy-httpd" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.065106 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" containerName="ceilometer-central-agent" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.066612 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.070152 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.070242 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.076713 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.167293 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-run-httpd\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.167345 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-log-httpd\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.167400 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.167442 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " 
pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.167493 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-scripts\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.167543 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-config-data\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.167571 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxv8w\" (UniqueName: \"kubernetes.io/projected/270f44f6-3136-45cd-8c79-08c89bda5409-kube-api-access-zxv8w\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.271890 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxv8w\" (UniqueName: \"kubernetes.io/projected/270f44f6-3136-45cd-8c79-08c89bda5409-kube-api-access-zxv8w\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.271996 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-run-httpd\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.272044 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-log-httpd\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.272110 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.272187 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.272270 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-scripts\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.272316 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-config-data\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.273423 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-run-httpd\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: 
I0103 04:35:48.273586 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-log-httpd\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.278500 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.278884 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-config-data\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.279122 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.279431 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-scripts\") pod \"ceilometer-0\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.301335 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxv8w\" (UniqueName: \"kubernetes.io/projected/270f44f6-3136-45cd-8c79-08c89bda5409-kube-api-access-zxv8w\") pod \"ceilometer-0\" (UID: 
\"270f44f6-3136-45cd-8c79-08c89bda5409\") " pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.377625 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.391897 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.475095 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-scripts\") pod \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.475507 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data\") pod \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.475546 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-etc-machine-id\") pod \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.475618 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data-custom\") pod \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.475671 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-combined-ca-bundle\") pod \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.475706 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtpmr\" (UniqueName: \"kubernetes.io/projected/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-kube-api-access-vtpmr\") pod \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.475748 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-logs\") pod \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\" (UID: \"97e54ce5-3373-4e17-ad57-63f10ed6b1fe\") " Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.476322 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-logs" (OuterVolumeSpecName: "logs") pod "97e54ce5-3373-4e17-ad57-63f10ed6b1fe" (UID: "97e54ce5-3373-4e17-ad57-63f10ed6b1fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.478926 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "97e54ce5-3373-4e17-ad57-63f10ed6b1fe" (UID: "97e54ce5-3373-4e17-ad57-63f10ed6b1fe"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.482052 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-scripts" (OuterVolumeSpecName: "scripts") pod "97e54ce5-3373-4e17-ad57-63f10ed6b1fe" (UID: "97e54ce5-3373-4e17-ad57-63f10ed6b1fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.483454 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97e54ce5-3373-4e17-ad57-63f10ed6b1fe" (UID: "97e54ce5-3373-4e17-ad57-63f10ed6b1fe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.487468 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-kube-api-access-vtpmr" (OuterVolumeSpecName: "kube-api-access-vtpmr") pod "97e54ce5-3373-4e17-ad57-63f10ed6b1fe" (UID: "97e54ce5-3373-4e17-ad57-63f10ed6b1fe"). InnerVolumeSpecName "kube-api-access-vtpmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.522191 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97e54ce5-3373-4e17-ad57-63f10ed6b1fe" (UID: "97e54ce5-3373-4e17-ad57-63f10ed6b1fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.539559 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data" (OuterVolumeSpecName: "config-data") pod "97e54ce5-3373-4e17-ad57-63f10ed6b1fe" (UID: "97e54ce5-3373-4e17-ad57-63f10ed6b1fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.577597 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.577628 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.577640 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.577649 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.577661 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtpmr\" (UniqueName: \"kubernetes.io/projected/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-kube-api-access-vtpmr\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.577669 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.577676 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97e54ce5-3373-4e17-ad57-63f10ed6b1fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.705729 4865 generic.go:334] "Generic (PLEG): container finished" podID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerID="3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb" exitCode=0 Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.705760 4865 generic.go:334] "Generic (PLEG): container finished" podID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerID="3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec" exitCode=143 Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.705789 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.705812 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97e54ce5-3373-4e17-ad57-63f10ed6b1fe","Type":"ContainerDied","Data":"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb"} Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.705845 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97e54ce5-3373-4e17-ad57-63f10ed6b1fe","Type":"ContainerDied","Data":"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec"} Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.705854 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"97e54ce5-3373-4e17-ad57-63f10ed6b1fe","Type":"ContainerDied","Data":"6babdd760833b9f21a93b69be3d429676f4e7ecc3176abe516f755f0633c9304"} Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.705869 4865 
scope.go:117] "RemoveContainer" containerID="3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.709216 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5974c4f958-8d8f6" event={"ID":"9890e74a-cb62-411d-8cf0-ce88ffcc73e0","Type":"ContainerStarted","Data":"c8573d0a4e5163291f22d9a0dac63259c4d4d471e12eebad807d18a40fb32c0a"} Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.709246 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5974c4f958-8d8f6" event={"ID":"9890e74a-cb62-411d-8cf0-ce88ffcc73e0","Type":"ContainerStarted","Data":"54f4d7d5683d881b504136688640ba91b0c7842016f67817c3397570537f2d11"} Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.709256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5974c4f958-8d8f6" event={"ID":"9890e74a-cb62-411d-8cf0-ce88ffcc73e0","Type":"ContainerStarted","Data":"136752d45544ba0c5c8bef31d8e2b0fc38da6d452da76f6714e1e21fad4c8f20"} Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.709354 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.709393 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.757241 4865 scope.go:117] "RemoveContainer" containerID="3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.764246 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5974c4f958-8d8f6" podStartSLOduration=2.764230906 podStartE2EDuration="2.764230906s" podCreationTimestamp="2026-01-03 04:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-03 04:35:48.731715486 +0000 UTC m=+1175.848768681" watchObservedRunningTime="2026-01-03 04:35:48.764230906 +0000 UTC m=+1175.881284091" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.766750 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.774105 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.785635 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.786242 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api-log" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.786254 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api-log" Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.786272 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.786278 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.786455 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.786476 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" containerName="cinder-api-log" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.787350 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.789934 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.790192 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.790413 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.793334 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.794433 4865 scope.go:117] "RemoveContainer" containerID="3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb" Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.794876 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb\": container with ID starting with 3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb not found: ID does not exist" containerID="3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.794903 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb"} err="failed to get container status \"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb\": rpc error: code = NotFound desc = could not find container \"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb\": container with ID starting with 3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb not found: ID does not exist" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 
04:35:48.794924 4865 scope.go:117] "RemoveContainer" containerID="3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec" Jan 03 04:35:48 crc kubenswrapper[4865]: E0103 04:35:48.795193 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec\": container with ID starting with 3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec not found: ID does not exist" containerID="3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.795210 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec"} err="failed to get container status \"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec\": rpc error: code = NotFound desc = could not find container \"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec\": container with ID starting with 3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec not found: ID does not exist" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.795224 4865 scope.go:117] "RemoveContainer" containerID="3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.795411 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb"} err="failed to get container status \"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb\": rpc error: code = NotFound desc = could not find container \"3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb\": container with ID starting with 3d6918fc97498683ff7151109c7d8e123e596083726a87919f1093346f883acb not found: ID does not exist" Jan 03 04:35:48 crc 
kubenswrapper[4865]: I0103 04:35:48.795424 4865 scope.go:117] "RemoveContainer" containerID="3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.795644 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec"} err="failed to get container status \"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec\": rpc error: code = NotFound desc = could not find container \"3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec\": container with ID starting with 3d31461272756eb83f61e06048aaa01e3f224670721cc4b044581bd2435e3aec not found: ID does not exist" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.855461 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.883658 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-scripts\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.883738 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ggj\" (UniqueName: \"kubernetes.io/projected/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-kube-api-access-g5ggj\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.883758 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-config-data\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " 
pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.883787 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-logs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.884590 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.884663 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-config-data-custom\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.884699 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.884845 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.884900 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986639 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986681 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986708 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-scripts\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986752 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5ggj\" (UniqueName: \"kubernetes.io/projected/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-kube-api-access-g5ggj\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986772 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-config-data\") pod \"cinder-api-0\" (UID: 
\"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986797 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-logs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986851 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-config-data-custom\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.986891 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.987240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.987512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-logs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.991853 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-scripts\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.992600 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.993244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-config-data-custom\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.993324 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.993966 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-config-data\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:48 crc kubenswrapper[4865]: I0103 04:35:48.994684 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.003015 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5ggj\" (UniqueName: \"kubernetes.io/projected/74a06fab-e04b-4eca-b4b1-a9d69b526c1d-kube-api-access-g5ggj\") pod \"cinder-api-0\" (UID: \"74a06fab-e04b-4eca-b4b1-a9d69b526c1d\") " pod="openstack/cinder-api-0" Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.116107 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.167273 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e54ce5-3373-4e17-ad57-63f10ed6b1fe" path="/var/lib/kubelet/pods/97e54ce5-3373-4e17-ad57-63f10ed6b1fe/volumes" Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.168545 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec473391-48f3-447b-bcd5-bbee75aa85a4" path="/var/lib/kubelet/pods/ec473391-48f3-447b-bcd5-bbee75aa85a4/volumes" Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.589313 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 03 04:35:49 crc kubenswrapper[4865]: W0103 04:35:49.593993 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74a06fab_e04b_4eca_b4b1_a9d69b526c1d.slice/crio-1c35392a0eb579ccfb8cc0e6dad921adf2a7395442e06766659e8dcae415656d WatchSource:0}: Error finding container 1c35392a0eb579ccfb8cc0e6dad921adf2a7395442e06766659e8dcae415656d: Status 404 returned error can't find the container with id 1c35392a0eb579ccfb8cc0e6dad921adf2a7395442e06766659e8dcae415656d Jan 03 04:35:49 crc 
kubenswrapper[4865]: I0103 04:35:49.720592 4865 generic.go:334] "Generic (PLEG): container finished" podID="70339b26-8f06-4fe7-821e-cc376084eace" containerID="e8c51f3df0fa17191585fbd98de1dd756c10077859e51459e1c8c79c4ab744ad" exitCode=0 Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.720633 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc9469fc6-wdk7w" event={"ID":"70339b26-8f06-4fe7-821e-cc376084eace","Type":"ContainerDied","Data":"e8c51f3df0fa17191585fbd98de1dd756c10077859e51459e1c8c79c4ab744ad"} Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.722012 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74a06fab-e04b-4eca-b4b1-a9d69b526c1d","Type":"ContainerStarted","Data":"1c35392a0eb579ccfb8cc0e6dad921adf2a7395442e06766659e8dcae415656d"} Jan 03 04:35:49 crc kubenswrapper[4865]: I0103 04:35:49.723533 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerStarted","Data":"0ef16394fb4daa5db20bd96779e589e0b600e08e89269c9631e6bf49af876abf"} Jan 03 04:35:50 crc kubenswrapper[4865]: I0103 04:35:50.080044 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cc9469fc6-wdk7w" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 03 04:35:50 crc kubenswrapper[4865]: I0103 04:35:50.736823 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74a06fab-e04b-4eca-b4b1-a9d69b526c1d","Type":"ContainerStarted","Data":"af473445663733911c2351bf3ef85bcc4e509234b62b75fba4bf9577330f88f9"} Jan 03 04:35:50 crc kubenswrapper[4865]: I0103 04:35:50.738337 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerStarted","Data":"699d7f3e3d36ad24cf1da27f839436542f4e85ad4481784617973f2fc7317aea"} Jan 03 04:35:50 crc kubenswrapper[4865]: I0103 04:35:50.738367 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerStarted","Data":"410f82eb55954cdac6aea54dc914d51106839f1938e901ee09344c39a67eb370"} Jan 03 04:35:51 crc kubenswrapper[4865]: I0103 04:35:51.756940 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74a06fab-e04b-4eca-b4b1-a9d69b526c1d","Type":"ContainerStarted","Data":"dd689c8fbc80bac436d32de31c26eef9b880208f315a745a0d486fef4d2f3183"} Jan 03 04:35:51 crc kubenswrapper[4865]: I0103 04:35:51.757422 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 03 04:35:51 crc kubenswrapper[4865]: I0103 04:35:51.785075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerStarted","Data":"2365eb41d15fcfaa004c56e16bff6d7d0f87c0bfee23b0738131f319022b5e03"} Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.245028 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.273021 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.272976815 podStartE2EDuration="4.272976815s" podCreationTimestamp="2026-01-03 04:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:35:51.78297008 +0000 UTC m=+1178.900023265" watchObservedRunningTime="2026-01-03 04:35:52.272976815 +0000 UTC m=+1179.390030040" Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 
04:35:52.276582 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.299624 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.454157 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-sp7qt"] Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.455379 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" podUID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerName="dnsmasq-dns" containerID="cri-o://2ec2dd79dfd49370533ce9cb19388f56b241f86846a8fbfdc6c63274c5f97ab4" gracePeriod=10 Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.541751 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.800917 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.801925 4865 generic.go:334] "Generic (PLEG): container finished" podID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerID="2ec2dd79dfd49370533ce9cb19388f56b241f86846a8fbfdc6c63274c5f97ab4" exitCode=0 Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.802109 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" event={"ID":"60a739bd-909b-42c4-83a4-6003ebd5e9a6","Type":"ContainerDied","Data":"2ec2dd79dfd49370533ce9cb19388f56b241f86846a8fbfdc6c63274c5f97ab4"} Jan 03 04:35:52 crc kubenswrapper[4865]: I0103 04:35:52.873844 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.010563 4865 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.103577 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-sb\") pod \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.103991 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-config\") pod \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.104050 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-nb\") pod \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.104520 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-svc\") pod \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.104645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-swift-storage-0\") pod \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.104666 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-p6h74\" (UniqueName: \"kubernetes.io/projected/60a739bd-909b-42c4-83a4-6003ebd5e9a6-kube-api-access-p6h74\") pod \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\" (UID: \"60a739bd-909b-42c4-83a4-6003ebd5e9a6\") " Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.117352 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a739bd-909b-42c4-83a4-6003ebd5e9a6-kube-api-access-p6h74" (OuterVolumeSpecName: "kube-api-access-p6h74") pod "60a739bd-909b-42c4-83a4-6003ebd5e9a6" (UID: "60a739bd-909b-42c4-83a4-6003ebd5e9a6"). InnerVolumeSpecName "kube-api-access-p6h74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.160410 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "60a739bd-909b-42c4-83a4-6003ebd5e9a6" (UID: "60a739bd-909b-42c4-83a4-6003ebd5e9a6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.163374 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60a739bd-909b-42c4-83a4-6003ebd5e9a6" (UID: "60a739bd-909b-42c4-83a4-6003ebd5e9a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.170031 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "60a739bd-909b-42c4-83a4-6003ebd5e9a6" (UID: "60a739bd-909b-42c4-83a4-6003ebd5e9a6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.175363 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "60a739bd-909b-42c4-83a4-6003ebd5e9a6" (UID: "60a739bd-909b-42c4-83a4-6003ebd5e9a6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.184236 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-config" (OuterVolumeSpecName: "config") pod "60a739bd-909b-42c4-83a4-6003ebd5e9a6" (UID: "60a739bd-909b-42c4-83a4-6003ebd5e9a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.206528 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.206750 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6h74\" (UniqueName: \"kubernetes.io/projected/60a739bd-909b-42c4-83a4-6003ebd5e9a6-kube-api-access-p6h74\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.206815 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.206881 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-config\") on node \"crc\" DevicePath \"\"" Jan 
03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.206944 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.207000 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60a739bd-909b-42c4-83a4-6003ebd5e9a6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.812199 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerStarted","Data":"0c43deb92ec181f53301696c36b7c9df5d0ed7e6b794ccb4c2db17879ed96c7c"} Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.813991 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.816200 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.816199 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-sp7qt" event={"ID":"60a739bd-909b-42c4-83a4-6003ebd5e9a6","Type":"ContainerDied","Data":"1237454bb25bfac7b170f750f92cf43eef35b75ab649df73f578ef5b868cf8a2"} Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.816327 4865 scope.go:117] "RemoveContainer" containerID="2ec2dd79dfd49370533ce9cb19388f56b241f86846a8fbfdc6c63274c5f97ab4" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.816503 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0a66951e-08cf-427d-9be0-016513057676" containerName="cinder-scheduler" containerID="cri-o://c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7" gracePeriod=30 Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.816596 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0a66951e-08cf-427d-9be0-016513057676" containerName="probe" containerID="cri-o://c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239" gracePeriod=30 Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.846886 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.071535801 podStartE2EDuration="5.846866596s" podCreationTimestamp="2026-01-03 04:35:48 +0000 UTC" firstStartedPulling="2026-01-03 04:35:48.884904397 +0000 UTC m=+1176.001957582" lastFinishedPulling="2026-01-03 04:35:52.660235192 +0000 UTC m=+1179.777288377" observedRunningTime="2026-01-03 04:35:53.840025621 +0000 UTC m=+1180.957078806" watchObservedRunningTime="2026-01-03 04:35:53.846866596 +0000 UTC m=+1180.963919771" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.849494 4865 scope.go:117] "RemoveContainer" 
containerID="d05bf9a32717fd6835cf6faa4f1ee079cfae0a97c0162646eda26516db11571a" Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.877737 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-sp7qt"] Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.887988 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-sp7qt"] Jan 03 04:35:53 crc kubenswrapper[4865]: I0103 04:35:53.986186 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:54 crc kubenswrapper[4865]: I0103 04:35:54.836908 4865 generic.go:334] "Generic (PLEG): container finished" podID="0a66951e-08cf-427d-9be0-016513057676" containerID="c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239" exitCode=0 Jan 03 04:35:54 crc kubenswrapper[4865]: I0103 04:35:54.838361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a66951e-08cf-427d-9be0-016513057676","Type":"ContainerDied","Data":"c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239"} Jan 03 04:35:55 crc kubenswrapper[4865]: I0103 04:35:55.174916 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" path="/var/lib/kubelet/pods/60a739bd-909b-42c4-83a4-6003ebd5e9a6/volumes" Jan 03 04:35:55 crc kubenswrapper[4865]: I0103 04:35:55.343179 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5974c4f958-8d8f6" Jan 03 04:35:55 crc kubenswrapper[4865]: I0103 04:35:55.421400 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56cb66bf5b-6mg8w"] Jan 03 04:35:55 crc kubenswrapper[4865]: I0103 04:35:55.421662 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56cb66bf5b-6mg8w" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" 
containerName="barbican-api-log" containerID="cri-o://43ec9ab91bafd73083557b98586e5bc4367c506e25f910505795aab4d4ae075e" gracePeriod=30 Jan 03 04:35:55 crc kubenswrapper[4865]: I0103 04:35:55.421744 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56cb66bf5b-6mg8w" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api" containerID="cri-o://8a2751f0523a99ced43c0f54dd621c55679ffeeabfe6f783a357f125a6d0923f" gracePeriod=30 Jan 03 04:35:55 crc kubenswrapper[4865]: I0103 04:35:55.848420 4865 generic.go:334] "Generic (PLEG): container finished" podID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerID="43ec9ab91bafd73083557b98586e5bc4367c506e25f910505795aab4d4ae075e" exitCode=143 Jan 03 04:35:55 crc kubenswrapper[4865]: I0103 04:35:55.848512 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56cb66bf5b-6mg8w" event={"ID":"e920a1ff-4b51-4260-948c-2eaa5308498a","Type":"ContainerDied","Data":"43ec9ab91bafd73083557b98586e5bc4367c506e25f910505795aab4d4ae075e"} Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.083002 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7b8664f56d-q48t7" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.393523 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.538724 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a66951e-08cf-427d-9be0-016513057676-etc-machine-id\") pod \"0a66951e-08cf-427d-9be0-016513057676\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.539029 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-combined-ca-bundle\") pod \"0a66951e-08cf-427d-9be0-016513057676\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.538844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a66951e-08cf-427d-9be0-016513057676-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a66951e-08cf-427d-9be0-016513057676" (UID: "0a66951e-08cf-427d-9be0-016513057676"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.539188 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97qlm\" (UniqueName: \"kubernetes.io/projected/0a66951e-08cf-427d-9be0-016513057676-kube-api-access-97qlm\") pod \"0a66951e-08cf-427d-9be0-016513057676\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.539212 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-scripts\") pod \"0a66951e-08cf-427d-9be0-016513057676\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.539318 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data\") pod \"0a66951e-08cf-427d-9be0-016513057676\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.539343 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data-custom\") pod \"0a66951e-08cf-427d-9be0-016513057676\" (UID: \"0a66951e-08cf-427d-9be0-016513057676\") " Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.540326 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a66951e-08cf-427d-9be0-016513057676-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.563288 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-scripts" (OuterVolumeSpecName: "scripts") pod 
"0a66951e-08cf-427d-9be0-016513057676" (UID: "0a66951e-08cf-427d-9be0-016513057676"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.563441 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a66951e-08cf-427d-9be0-016513057676" (UID: "0a66951e-08cf-427d-9be0-016513057676"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.565256 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a66951e-08cf-427d-9be0-016513057676-kube-api-access-97qlm" (OuterVolumeSpecName: "kube-api-access-97qlm") pod "0a66951e-08cf-427d-9be0-016513057676" (UID: "0a66951e-08cf-427d-9be0-016513057676"). InnerVolumeSpecName "kube-api-access-97qlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.612495 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56cb66bf5b-6mg8w" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:36312->10.217.0.160:9311: read: connection reset by peer" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.612577 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56cb66bf5b-6mg8w" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:36316->10.217.0.160:9311: read: connection reset by peer" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.630059 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a66951e-08cf-427d-9be0-016513057676" (UID: "0a66951e-08cf-427d-9be0-016513057676"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.642034 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97qlm\" (UniqueName: \"kubernetes.io/projected/0a66951e-08cf-427d-9be0-016513057676-kube-api-access-97qlm\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.642071 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.642085 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.642097 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.693287 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data" (OuterVolumeSpecName: "config-data") pod "0a66951e-08cf-427d-9be0-016513057676" (UID: "0a66951e-08cf-427d-9be0-016513057676"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.746913 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a66951e-08cf-427d-9be0-016513057676-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.874566 4865 generic.go:334] "Generic (PLEG): container finished" podID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerID="8a2751f0523a99ced43c0f54dd621c55679ffeeabfe6f783a357f125a6d0923f" exitCode=0 Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.874638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56cb66bf5b-6mg8w" event={"ID":"e920a1ff-4b51-4260-948c-2eaa5308498a","Type":"ContainerDied","Data":"8a2751f0523a99ced43c0f54dd621c55679ffeeabfe6f783a357f125a6d0923f"} Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.877010 4865 generic.go:334] "Generic (PLEG): container finished" podID="0a66951e-08cf-427d-9be0-016513057676" containerID="c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7" exitCode=0 Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.877045 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a66951e-08cf-427d-9be0-016513057676","Type":"ContainerDied","Data":"c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7"} Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.877072 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a66951e-08cf-427d-9be0-016513057676","Type":"ContainerDied","Data":"824ae6bece0810c62295628968ce2b6a15f65e73650c05fc5ed0c0444f7f0acd"} Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.877091 4865 scope.go:117] "RemoveContainer" containerID="c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.877233 4865 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.911849 4865 scope.go:117] "RemoveContainer" containerID="c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.923373 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.937190 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.944088 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.948016 4865 scope.go:117] "RemoveContainer" containerID="c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.950838 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239\": container with ID starting with c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239 not found: ID does not exist" containerID="c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.950897 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239"} err="failed to get container status \"c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239\": rpc error: code = NotFound desc = could not find container \"c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239\": container with ID starting with c7a2f94f9858f689734d51353b4890a1a659a95769c0a63b59303a33c25fa239 not found: ID does not exist" Jan 03 04:35:58 crc 
kubenswrapper[4865]: I0103 04:35:58.950929 4865 scope.go:117] "RemoveContainer" containerID="c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.951260 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7\": container with ID starting with c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7 not found: ID does not exist" containerID="c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.951291 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7"} err="failed to get container status \"c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7\": rpc error: code = NotFound desc = could not find container \"c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7\": container with ID starting with c418480733f87e19faeb420ddabf4f1768f605cde5fca89352fa49a8676229f7 not found: ID does not exist" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.954756 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.955578 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerName="dnsmasq-dns" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.955613 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerName="dnsmasq-dns" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.955643 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerName="init" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.955650 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerName="init" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.955665 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a66951e-08cf-427d-9be0-016513057676" containerName="probe" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.955671 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a66951e-08cf-427d-9be0-016513057676" containerName="probe" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.955685 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a66951e-08cf-427d-9be0-016513057676" containerName="cinder-scheduler" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.955691 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a66951e-08cf-427d-9be0-016513057676" containerName="cinder-scheduler" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.955953 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a739bd-909b-42c4-83a4-6003ebd5e9a6" containerName="dnsmasq-dns" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.955974 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a66951e-08cf-427d-9be0-016513057676" 
containerName="probe" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.955986 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api-log" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.956002 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a66951e-08cf-427d-9be0-016513057676" containerName="cinder-scheduler" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.956014 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.956240 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api-log" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.956261 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api-log" Jan 03 04:35:58 crc kubenswrapper[4865]: E0103 04:35:58.956275 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.956281 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" containerName="barbican-api" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.956962 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.957042 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 04:35:58 crc kubenswrapper[4865]: I0103 04:35:58.958946 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.152519 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data\") pod \"e920a1ff-4b51-4260-948c-2eaa5308498a\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.153802 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-combined-ca-bundle\") pod \"e920a1ff-4b51-4260-948c-2eaa5308498a\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.154145 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g4d8\" (UniqueName: \"kubernetes.io/projected/e920a1ff-4b51-4260-948c-2eaa5308498a-kube-api-access-8g4d8\") pod \"e920a1ff-4b51-4260-948c-2eaa5308498a\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.154323 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e920a1ff-4b51-4260-948c-2eaa5308498a-logs\") pod \"e920a1ff-4b51-4260-948c-2eaa5308498a\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.164315 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data-custom\") pod \"e920a1ff-4b51-4260-948c-2eaa5308498a\" (UID: \"e920a1ff-4b51-4260-948c-2eaa5308498a\") " 
Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.156120 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e920a1ff-4b51-4260-948c-2eaa5308498a-logs" (OuterVolumeSpecName: "logs") pod "e920a1ff-4b51-4260-948c-2eaa5308498a" (UID: "e920a1ff-4b51-4260-948c-2eaa5308498a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.160263 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e920a1ff-4b51-4260-948c-2eaa5308498a-kube-api-access-8g4d8" (OuterVolumeSpecName: "kube-api-access-8g4d8") pod "e920a1ff-4b51-4260-948c-2eaa5308498a" (UID: "e920a1ff-4b51-4260-948c-2eaa5308498a"). InnerVolumeSpecName "kube-api-access-8g4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.166748 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.167065 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.167338 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4hf\" (UniqueName: \"kubernetes.io/projected/0bfb3310-1647-4ce9-887c-ccff650d42c5-kube-api-access-fg4hf\") pod \"cinder-scheduler-0\" (UID: 
\"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.167588 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-config-data\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.167777 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-scripts\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.167973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bfb3310-1647-4ce9-887c-ccff650d42c5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.168250 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g4d8\" (UniqueName: \"kubernetes.io/projected/e920a1ff-4b51-4260-948c-2eaa5308498a-kube-api-access-8g4d8\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.168374 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e920a1ff-4b51-4260-948c-2eaa5308498a-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.171690 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a66951e-08cf-427d-9be0-016513057676" path="/var/lib/kubelet/pods/0a66951e-08cf-427d-9be0-016513057676/volumes" 
Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.173841 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e920a1ff-4b51-4260-948c-2eaa5308498a" (UID: "e920a1ff-4b51-4260-948c-2eaa5308498a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.195480 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e920a1ff-4b51-4260-948c-2eaa5308498a" (UID: "e920a1ff-4b51-4260-948c-2eaa5308498a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.226021 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data" (OuterVolumeSpecName: "config-data") pod "e920a1ff-4b51-4260-948c-2eaa5308498a" (UID: "e920a1ff-4b51-4260-948c-2eaa5308498a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270102 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270166 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg4hf\" (UniqueName: \"kubernetes.io/projected/0bfb3310-1647-4ce9-887c-ccff650d42c5-kube-api-access-fg4hf\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270241 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-config-data\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270267 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-scripts\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270285 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bfb3310-1647-4ce9-887c-ccff650d42c5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270341 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270354 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270367 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920a1ff-4b51-4260-948c-2eaa5308498a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.270426 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bfb3310-1647-4ce9-887c-ccff650d42c5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.275207 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.275613 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-config-data\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.276063 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.276790 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfb3310-1647-4ce9-887c-ccff650d42c5-scripts\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.288179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg4hf\" (UniqueName: \"kubernetes.io/projected/0bfb3310-1647-4ce9-887c-ccff650d42c5-kube-api-access-fg4hf\") pod \"cinder-scheduler-0\" (UID: \"0bfb3310-1647-4ce9-887c-ccff650d42c5\") " pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.289321 4865 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod7336705c-df3f-4630-8897-d1ab023d13e6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod7336705c-df3f-4630-8897-d1ab023d13e6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7336705c_df3f_4630_8897_d1ab023d13e6.slice" Jan 03 04:35:59 crc kubenswrapper[4865]: E0103 04:35:59.289362 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod7336705c-df3f-4630-8897-d1ab023d13e6] : unable to destroy cgroup paths for cgroup [kubepods besteffort 
pod7336705c-df3f-4630-8897-d1ab023d13e6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7336705c_df3f_4630_8897_d1ab023d13e6.slice" pod="openstack/glance-default-external-api-0" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.573512 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.892190 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.892309 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56cb66bf5b-6mg8w" event={"ID":"e920a1ff-4b51-4260-948c-2eaa5308498a","Type":"ContainerDied","Data":"4640ad75ff61ba17e2f22859f8ba1290e2ce8ab9b7d4150e51dceaeae898d5df"} Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.892436 4865 scope.go:117] "RemoveContainer" containerID="8a2751f0523a99ced43c0f54dd621c55679ffeeabfe6f783a357f125a6d0923f" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.892764 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56cb66bf5b-6mg8w" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.915542 4865 scope.go:117] "RemoveContainer" containerID="43ec9ab91bafd73083557b98586e5bc4367c506e25f910505795aab4d4ae075e" Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.962721 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.972628 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.985017 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56cb66bf5b-6mg8w"] Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.992554 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-56cb66bf5b-6mg8w"] Jan 03 04:35:59 crc kubenswrapper[4865]: I0103 04:35:59.999464 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.002113 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.005801 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.006069 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.007892 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.040104 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.080395 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cc9469fc6-wdk7w" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.086046 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-logs\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.086120 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 
04:36:00.086180 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.086210 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-config-data\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.086236 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.086268 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-scripts\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.086285 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66mbh\" (UniqueName: \"kubernetes.io/projected/990d694e-66b0-4fdc-b826-9e0149853b25-kube-api-access-66mbh\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc 
kubenswrapper[4865]: I0103 04:36:00.086408 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.187908 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188189 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-config-data\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188219 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66mbh\" (UniqueName: \"kubernetes.io/projected/990d694e-66b0-4fdc-b826-9e0149853b25-kube-api-access-66mbh\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188266 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-scripts\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188291 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-logs\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188340 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.188364 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.191937 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.192836 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-logs\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.193714 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-config-data\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.200097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.201029 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.203913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.209357 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66mbh\" (UniqueName: \"kubernetes.io/projected/990d694e-66b0-4fdc-b826-9e0149853b25-kube-api-access-66mbh\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.223707 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.413939 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.921242 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bfb3310-1647-4ce9-887c-ccff650d42c5","Type":"ContainerStarted","Data":"19aac0cbd2c1e7c5755d9c3c7e526e09ab4878783b86a1a0830d0bf8a5b9f56e"} Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.921681 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bfb3310-1647-4ce9-887c-ccff650d42c5","Type":"ContainerStarted","Data":"857e6f88ab4ca59608366c002432ab1f883a08f11106b6be8ae86e8d39be70ba"} Jan 03 04:36:00 crc kubenswrapper[4865]: I0103 04:36:00.956848 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:00 crc kubenswrapper[4865]: W0103 04:36:00.969750 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod990d694e_66b0_4fdc_b826_9e0149853b25.slice/crio-6da31f6b8f4d0e34750492838927ba05446c5ddf52d3441605f775f66593ca0c WatchSource:0}: Error finding container 6da31f6b8f4d0e34750492838927ba05446c5ddf52d3441605f775f66593ca0c: Status 404 returned error can't find the container with id 6da31f6b8f4d0e34750492838927ba05446c5ddf52d3441605f775f66593ca0c Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.043269 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.168161 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7336705c-df3f-4630-8897-d1ab023d13e6" path="/var/lib/kubelet/pods/7336705c-df3f-4630-8897-d1ab023d13e6/volumes" Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.168985 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e920a1ff-4b51-4260-948c-2eaa5308498a" path="/var/lib/kubelet/pods/e920a1ff-4b51-4260-948c-2eaa5308498a/volumes" Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.941996 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bfb3310-1647-4ce9-887c-ccff650d42c5","Type":"ContainerStarted","Data":"e3b7aaeaec361aa2eb32a401388840ce464ef082142cb39faafe6aef93991a5e"} Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.968126 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"990d694e-66b0-4fdc-b826-9e0149853b25","Type":"ContainerStarted","Data":"fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524"} Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.968166 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"990d694e-66b0-4fdc-b826-9e0149853b25","Type":"ContainerStarted","Data":"6da31f6b8f4d0e34750492838927ba05446c5ddf52d3441605f775f66593ca0c"} Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.971320 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:36:01 crc kubenswrapper[4865]: I0103 04:36:01.971436 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.971415817 podStartE2EDuration="3.971415817s" podCreationTimestamp="2026-01-03 04:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:01.967493732 +0000 UTC m=+1189.084546917" watchObservedRunningTime="2026-01-03 04:36:01.971415817 +0000 UTC m=+1189.088469012" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.015079 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.028024 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.031015 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.031059 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.031140 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xktp8" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.037959 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3ac055f-a850-4676-8bc2-0cd50509ff30-openstack-config-secret\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.038002 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3ac055f-a850-4676-8bc2-0cd50509ff30-openstack-config\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.038050 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzml\" (UniqueName: \"kubernetes.io/projected/a3ac055f-a850-4676-8bc2-0cd50509ff30-kube-api-access-frzml\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.038148 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3ac055f-a850-4676-8bc2-0cd50509ff30-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.043464 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.139405 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ac055f-a850-4676-8bc2-0cd50509ff30-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.139641 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3ac055f-a850-4676-8bc2-0cd50509ff30-openstack-config-secret\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.140147 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a3ac055f-a850-4676-8bc2-0cd50509ff30-openstack-config\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.140210 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frzml\" (UniqueName: \"kubernetes.io/projected/a3ac055f-a850-4676-8bc2-0cd50509ff30-kube-api-access-frzml\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.141236 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a3ac055f-a850-4676-8bc2-0cd50509ff30-openstack-config\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.166922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frzml\" (UniqueName: \"kubernetes.io/projected/a3ac055f-a850-4676-8bc2-0cd50509ff30-kube-api-access-frzml\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.170775 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a3ac055f-a850-4676-8bc2-0cd50509ff30-openstack-config-secret\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.171008 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ac055f-a850-4676-8bc2-0cd50509ff30-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a3ac055f-a850-4676-8bc2-0cd50509ff30\") " pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.355464 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 03 04:36:02 crc kubenswrapper[4865]: W0103 04:36:02.949698 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ac055f_a850_4676_8bc2_0cd50509ff30.slice/crio-53fc5b93842d2ccd6975f3e8652bd15a0207428e29aa1941fc469b1fb5740bcf WatchSource:0}: Error finding container 53fc5b93842d2ccd6975f3e8652bd15a0207428e29aa1941fc469b1fb5740bcf: Status 404 returned error can't find the container with id 53fc5b93842d2ccd6975f3e8652bd15a0207428e29aa1941fc469b1fb5740bcf Jan 03 04:36:02 crc kubenswrapper[4865]: I0103 04:36:02.958493 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 03 04:36:03 crc kubenswrapper[4865]: I0103 04:36:03.008473 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"990d694e-66b0-4fdc-b826-9e0149853b25","Type":"ContainerStarted","Data":"511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9"} Jan 03 04:36:03 crc kubenswrapper[4865]: I0103 04:36:03.013820 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a3ac055f-a850-4676-8bc2-0cd50509ff30","Type":"ContainerStarted","Data":"53fc5b93842d2ccd6975f3e8652bd15a0207428e29aa1941fc469b1fb5740bcf"} Jan 03 04:36:03 crc kubenswrapper[4865]: I0103 04:36:03.041419 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.041398859 podStartE2EDuration="4.041398859s" podCreationTimestamp="2026-01-03 04:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:03.034445961 +0000 UTC m=+1190.151499156" watchObservedRunningTime="2026-01-03 04:36:03.041398859 +0000 UTC m=+1190.158452044" Jan 03 04:36:04 crc kubenswrapper[4865]: I0103 
04:36:04.315052 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b946bd96f-ph9x6" Jan 03 04:36:04 crc kubenswrapper[4865]: I0103 04:36:04.382048 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66b4884f8d-ksw2z"] Jan 03 04:36:04 crc kubenswrapper[4865]: I0103 04:36:04.382326 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66b4884f8d-ksw2z" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-api" containerID="cri-o://694b1e16a154f9a6f3c0e0680b8b829ac7548cf807ecae71fedd48daf6040f07" gracePeriod=30 Jan 03 04:36:04 crc kubenswrapper[4865]: I0103 04:36:04.382409 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66b4884f8d-ksw2z" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-httpd" containerID="cri-o://ac07760e3fcc67b9bc34a826fec51c89ba07084899d8c4b104cbc9faf033e78b" gracePeriod=30 Jan 03 04:36:04 crc kubenswrapper[4865]: I0103 04:36:04.574427 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.041983 4865 generic.go:334] "Generic (PLEG): container finished" podID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerID="ac07760e3fcc67b9bc34a826fec51c89ba07084899d8c4b104cbc9faf033e78b" exitCode=0 Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.042106 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b4884f8d-ksw2z" event={"ID":"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f","Type":"ContainerDied","Data":"ac07760e3fcc67b9bc34a826fec51c89ba07084899d8c4b104cbc9faf033e78b"} Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.676201 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5496856655-kc92p"] Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.681055 4865 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.684222 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.684995 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.685116 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.688059 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5496856655-kc92p"] Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.810361 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/638adfee-76ef-47db-bd03-1dbffb050ac8-log-httpd\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.810455 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-internal-tls-certs\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.810530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-config-data\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc 
kubenswrapper[4865]: I0103 04:36:05.810557 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-public-tls-certs\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.810584 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-combined-ca-bundle\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.810609 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htq4v\" (UniqueName: \"kubernetes.io/projected/638adfee-76ef-47db-bd03-1dbffb050ac8-kube-api-access-htq4v\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.810654 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/638adfee-76ef-47db-bd03-1dbffb050ac8-run-httpd\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.810701 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/638adfee-76ef-47db-bd03-1dbffb050ac8-etc-swift\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " 
pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/638adfee-76ef-47db-bd03-1dbffb050ac8-run-httpd\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912284 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/638adfee-76ef-47db-bd03-1dbffb050ac8-etc-swift\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912441 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/638adfee-76ef-47db-bd03-1dbffb050ac8-log-httpd\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912488 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-internal-tls-certs\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912564 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-config-data\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: 
I0103 04:36:05.912594 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-public-tls-certs\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912621 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-combined-ca-bundle\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912644 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htq4v\" (UniqueName: \"kubernetes.io/projected/638adfee-76ef-47db-bd03-1dbffb050ac8-kube-api-access-htq4v\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.912727 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/638adfee-76ef-47db-bd03-1dbffb050ac8-run-httpd\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.913022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/638adfee-76ef-47db-bd03-1dbffb050ac8-log-httpd\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.919254 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-config-data\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.920131 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-combined-ca-bundle\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.929250 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/638adfee-76ef-47db-bd03-1dbffb050ac8-etc-swift\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.930880 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htq4v\" (UniqueName: \"kubernetes.io/projected/638adfee-76ef-47db-bd03-1dbffb050ac8-kube-api-access-htq4v\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.935073 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-internal-tls-certs\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.943527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/638adfee-76ef-47db-bd03-1dbffb050ac8-public-tls-certs\") pod \"swift-proxy-5496856655-kc92p\" (UID: \"638adfee-76ef-47db-bd03-1dbffb050ac8\") " pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:05 crc kubenswrapper[4865]: I0103 04:36:05.996726 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:06 crc kubenswrapper[4865]: I0103 04:36:06.591886 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5496856655-kc92p"] Jan 03 04:36:06 crc kubenswrapper[4865]: I0103 04:36:06.920510 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-677444457b-ftr4x" Jan 03 04:36:07 crc kubenswrapper[4865]: I0103 04:36:07.066882 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5496856655-kc92p" event={"ID":"638adfee-76ef-47db-bd03-1dbffb050ac8","Type":"ContainerStarted","Data":"880994b9c8cd2d631e9623a881ba8f4021ca018a4ffebc1d2dc1f00f0f1ff715"} Jan 03 04:36:07 crc kubenswrapper[4865]: I0103 04:36:07.066936 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5496856655-kc92p" event={"ID":"638adfee-76ef-47db-bd03-1dbffb050ac8","Type":"ContainerStarted","Data":"30e424f1325e8a7e16880519db93789ef89c29886adb549a86883bfbf60dee5a"} Jan 03 04:36:07 crc kubenswrapper[4865]: I0103 04:36:07.066955 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5496856655-kc92p" event={"ID":"638adfee-76ef-47db-bd03-1dbffb050ac8","Type":"ContainerStarted","Data":"bbe676140c66b361f886a7f6523835ceb2d85cc0ac0f93056caec2e6be4d1f86"} Jan 03 04:36:07 crc kubenswrapper[4865]: I0103 04:36:07.067132 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:07 crc kubenswrapper[4865]: I0103 04:36:07.067200 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:07 crc kubenswrapper[4865]: I0103 04:36:07.073100 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-677444457b-ftr4x" Jan 03 04:36:07 crc kubenswrapper[4865]: I0103 04:36:07.088063 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5496856655-kc92p" podStartSLOduration=2.088045337 podStartE2EDuration="2.088045337s" podCreationTimestamp="2026-01-03 04:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:07.084192343 +0000 UTC m=+1194.201245538" watchObservedRunningTime="2026-01-03 04:36:07.088045337 +0000 UTC m=+1194.205098532" Jan 03 04:36:08 crc kubenswrapper[4865]: I0103 04:36:08.376495 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:08 crc kubenswrapper[4865]: I0103 04:36:08.377652 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-central-agent" containerID="cri-o://410f82eb55954cdac6aea54dc914d51106839f1938e901ee09344c39a67eb370" gracePeriod=30 Jan 03 04:36:08 crc kubenswrapper[4865]: I0103 04:36:08.377741 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="sg-core" containerID="cri-o://2365eb41d15fcfaa004c56e16bff6d7d0f87c0bfee23b0738131f319022b5e03" gracePeriod=30 Jan 03 04:36:08 crc kubenswrapper[4865]: I0103 04:36:08.377781 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-notification-agent" containerID="cri-o://699d7f3e3d36ad24cf1da27f839436542f4e85ad4481784617973f2fc7317aea" gracePeriod=30 
Jan 03 04:36:08 crc kubenswrapper[4865]: I0103 04:36:08.377700 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="proxy-httpd" containerID="cri-o://0c43deb92ec181f53301696c36b7c9df5d0ed7e6b794ccb4c2db17879ed96c7c" gracePeriod=30 Jan 03 04:36:08 crc kubenswrapper[4865]: I0103 04:36:08.381933 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.165:3000/\": read tcp 10.217.0.2:51712->10.217.0.165:3000: read: connection reset by peer" Jan 03 04:36:09 crc kubenswrapper[4865]: I0103 04:36:09.780198 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.080198 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cc9469fc6-wdk7w" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.080361 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.106131 4865 generic.go:334] "Generic (PLEG): container finished" podID="270f44f6-3136-45cd-8c79-08c89bda5409" containerID="0c43deb92ec181f53301696c36b7c9df5d0ed7e6b794ccb4c2db17879ed96c7c" exitCode=0 Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.106172 4865 generic.go:334] "Generic (PLEG): container finished" podID="270f44f6-3136-45cd-8c79-08c89bda5409" containerID="2365eb41d15fcfaa004c56e16bff6d7d0f87c0bfee23b0738131f319022b5e03" exitCode=2 Jan 03 04:36:10 crc kubenswrapper[4865]: 
I0103 04:36:10.106183 4865 generic.go:334] "Generic (PLEG): container finished" podID="270f44f6-3136-45cd-8c79-08c89bda5409" containerID="410f82eb55954cdac6aea54dc914d51106839f1938e901ee09344c39a67eb370" exitCode=0 Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.106210 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerDied","Data":"0c43deb92ec181f53301696c36b7c9df5d0ed7e6b794ccb4c2db17879ed96c7c"} Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.106243 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerDied","Data":"2365eb41d15fcfaa004c56e16bff6d7d0f87c0bfee23b0738131f319022b5e03"} Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.106256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerDied","Data":"410f82eb55954cdac6aea54dc914d51106839f1938e901ee09344c39a67eb370"} Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.415165 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.415224 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.453294 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.464233 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.740169 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:36:10 crc kubenswrapper[4865]: I0103 04:36:10.740250 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:36:11 crc kubenswrapper[4865]: I0103 04:36:11.113177 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 03 04:36:11 crc kubenswrapper[4865]: I0103 04:36:11.113244 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.140796 4865 generic.go:334] "Generic (PLEG): container finished" podID="270f44f6-3136-45cd-8c79-08c89bda5409" containerID="699d7f3e3d36ad24cf1da27f839436542f4e85ad4481784617973f2fc7317aea" exitCode=0 Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.140876 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerDied","Data":"699d7f3e3d36ad24cf1da27f839436542f4e85ad4481784617973f2fc7317aea"} Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.302996 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.880609 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.969785 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-external-api-0" Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.973774 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.974024 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-log" containerID="cri-o://d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0" gracePeriod=30 Jan 03 04:36:12 crc kubenswrapper[4865]: I0103 04:36:12.974164 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-httpd" containerID="cri-o://a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8" gracePeriod=30 Jan 03 04:36:13 crc kubenswrapper[4865]: I0103 04:36:13.160846 4865 generic.go:334] "Generic (PLEG): container finished" podID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerID="d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0" exitCode=143 Jan 03 04:36:13 crc kubenswrapper[4865]: I0103 04:36:13.169859 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd307a63-8983-4a93-9c7f-e961c5eb6620","Type":"ContainerDied","Data":"d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0"} Jan 03 04:36:14 crc kubenswrapper[4865]: I0103 04:36:14.169993 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-log" containerID="cri-o://fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524" gracePeriod=30 Jan 03 04:36:14 crc kubenswrapper[4865]: I0103 04:36:14.170146 4865 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-external-api-0" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-httpd" containerID="cri-o://511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9" gracePeriod=30 Jan 03 04:36:14 crc kubenswrapper[4865]: I0103 04:36:14.176656 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": EOF" Jan 03 04:36:14 crc kubenswrapper[4865]: I0103 04:36:14.177189 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": EOF" Jan 03 04:36:15 crc kubenswrapper[4865]: I0103 04:36:15.182480 4865 generic.go:334] "Generic (PLEG): container finished" podID="990d694e-66b0-4fdc-b826-9e0149853b25" containerID="fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524" exitCode=143 Jan 03 04:36:15 crc kubenswrapper[4865]: I0103 04:36:15.182564 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"990d694e-66b0-4fdc-b826-9e0149853b25","Type":"ContainerDied","Data":"fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524"} Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.001829 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.003348 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5496856655-kc92p" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.235194 4865 generic.go:334] "Generic (PLEG): container finished" podID="70339b26-8f06-4fe7-821e-cc376084eace" 
containerID="90c28fa2ba55e8b0146141174ba280b6b86aa0c44d4d082839188cf483036d50" exitCode=137 Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.236485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc9469fc6-wdk7w" event={"ID":"70339b26-8f06-4fe7-821e-cc376084eace","Type":"ContainerDied","Data":"90c28fa2ba55e8b0146141174ba280b6b86aa0c44d4d082839188cf483036d50"} Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.550759 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.573588 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690501 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-scripts\") pod \"270f44f6-3136-45cd-8c79-08c89bda5409\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690595 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-sg-core-conf-yaml\") pod \"270f44f6-3136-45cd-8c79-08c89bda5409\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690628 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-combined-ca-bundle\") pod \"270f44f6-3136-45cd-8c79-08c89bda5409\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690673 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-scripts\") pod \"70339b26-8f06-4fe7-821e-cc376084eace\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690695 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l56zj\" (UniqueName: \"kubernetes.io/projected/70339b26-8f06-4fe7-821e-cc376084eace-kube-api-access-l56zj\") pod \"70339b26-8f06-4fe7-821e-cc376084eace\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690725 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-run-httpd\") pod \"270f44f6-3136-45cd-8c79-08c89bda5409\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690749 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-tls-certs\") pod \"70339b26-8f06-4fe7-821e-cc376084eace\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690766 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70339b26-8f06-4fe7-821e-cc376084eace-logs\") pod \"70339b26-8f06-4fe7-821e-cc376084eace\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690781 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-combined-ca-bundle\") pod \"70339b26-8f06-4fe7-821e-cc376084eace\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690818 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxv8w\" (UniqueName: \"kubernetes.io/projected/270f44f6-3136-45cd-8c79-08c89bda5409-kube-api-access-zxv8w\") pod \"270f44f6-3136-45cd-8c79-08c89bda5409\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690876 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-config-data\") pod \"70339b26-8f06-4fe7-821e-cc376084eace\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690915 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-secret-key\") pod \"70339b26-8f06-4fe7-821e-cc376084eace\" (UID: \"70339b26-8f06-4fe7-821e-cc376084eace\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690943 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-log-httpd\") pod \"270f44f6-3136-45cd-8c79-08c89bda5409\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.690968 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-config-data\") pod \"270f44f6-3136-45cd-8c79-08c89bda5409\" (UID: \"270f44f6-3136-45cd-8c79-08c89bda5409\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.691443 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70339b26-8f06-4fe7-821e-cc376084eace-logs" (OuterVolumeSpecName: "logs") pod "70339b26-8f06-4fe7-821e-cc376084eace" (UID: 
"70339b26-8f06-4fe7-821e-cc376084eace"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.696107 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "270f44f6-3136-45cd-8c79-08c89bda5409" (UID: "270f44f6-3136-45cd-8c79-08c89bda5409"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.696296 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-scripts" (OuterVolumeSpecName: "scripts") pod "270f44f6-3136-45cd-8c79-08c89bda5409" (UID: "270f44f6-3136-45cd-8c79-08c89bda5409"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.699565 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70339b26-8f06-4fe7-821e-cc376084eace-kube-api-access-l56zj" (OuterVolumeSpecName: "kube-api-access-l56zj") pod "70339b26-8f06-4fe7-821e-cc376084eace" (UID: "70339b26-8f06-4fe7-821e-cc376084eace"). InnerVolumeSpecName "kube-api-access-l56zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.701714 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270f44f6-3136-45cd-8c79-08c89bda5409-kube-api-access-zxv8w" (OuterVolumeSpecName: "kube-api-access-zxv8w") pod "270f44f6-3136-45cd-8c79-08c89bda5409" (UID: "270f44f6-3136-45cd-8c79-08c89bda5409"). InnerVolumeSpecName "kube-api-access-zxv8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.702125 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "270f44f6-3136-45cd-8c79-08c89bda5409" (UID: "270f44f6-3136-45cd-8c79-08c89bda5409"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.718627 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "70339b26-8f06-4fe7-821e-cc376084eace" (UID: "70339b26-8f06-4fe7-821e-cc376084eace"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.734363 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-config-data" (OuterVolumeSpecName: "config-data") pod "70339b26-8f06-4fe7-821e-cc376084eace" (UID: "70339b26-8f06-4fe7-821e-cc376084eace"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.734358 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-scripts" (OuterVolumeSpecName: "scripts") pod "70339b26-8f06-4fe7-821e-cc376084eace" (UID: "70339b26-8f06-4fe7-821e-cc376084eace"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.741319 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "270f44f6-3136-45cd-8c79-08c89bda5409" (UID: "270f44f6-3136-45cd-8c79-08c89bda5409"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.744839 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70339b26-8f06-4fe7-821e-cc376084eace" (UID: "70339b26-8f06-4fe7-821e-cc376084eace"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.771288 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "70339b26-8f06-4fe7-821e-cc376084eace" (UID: "70339b26-8f06-4fe7-821e-cc376084eace"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.779063 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.783800 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "270f44f6-3136-45cd-8c79-08c89bda5409" (UID: "270f44f6-3136-45cd-8c79-08c89bda5409"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793855 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793878 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793886 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793896 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793905 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793913 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l56zj\" (UniqueName: \"kubernetes.io/projected/70339b26-8f06-4fe7-821e-cc376084eace-kube-api-access-l56zj\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793922 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/270f44f6-3136-45cd-8c79-08c89bda5409-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793930 4865 reconciler_common.go:293] 
"Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793938 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70339b26-8f06-4fe7-821e-cc376084eace-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793945 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793953 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxv8w\" (UniqueName: \"kubernetes.io/projected/270f44f6-3136-45cd-8c79-08c89bda5409-kube-api-access-zxv8w\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793963 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70339b26-8f06-4fe7-821e-cc376084eace-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.793971 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70339b26-8f06-4fe7-821e-cc376084eace-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.800260 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-config-data" (OuterVolumeSpecName: "config-data") pod "270f44f6-3136-45cd-8c79-08c89bda5409" (UID: "270f44f6-3136-45cd-8c79-08c89bda5409"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895601 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-config-data\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895662 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895718 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-combined-ca-bundle\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895741 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-scripts\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895761 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-internal-tls-certs\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895815 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jbl5\" (UniqueName: 
\"kubernetes.io/projected/fd307a63-8983-4a93-9c7f-e961c5eb6620-kube-api-access-5jbl5\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895885 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-httpd-run\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.895950 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-logs\") pod \"fd307a63-8983-4a93-9c7f-e961c5eb6620\" (UID: \"fd307a63-8983-4a93-9c7f-e961c5eb6620\") " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.896261 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f44f6-3136-45cd-8c79-08c89bda5409-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.896559 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-logs" (OuterVolumeSpecName: "logs") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.900310 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-scripts" (OuterVolumeSpecName: "scripts") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.900456 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.914238 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd307a63-8983-4a93-9c7f-e961c5eb6620-kube-api-access-5jbl5" (OuterVolumeSpecName: "kube-api-access-5jbl5") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "kube-api-access-5jbl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.916391 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.926518 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.943560 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.948121 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-config-data" (OuterVolumeSpecName: "config-data") pod "fd307a63-8983-4a93-9c7f-e961c5eb6620" (UID: "fd307a63-8983-4a93-9c7f-e961c5eb6620"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.997972 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.998014 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd307a63-8983-4a93-9c7f-e961c5eb6620-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.998028 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.998066 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 
04:36:16.998080 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.998097 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.998109 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd307a63-8983-4a93-9c7f-e961c5eb6620-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:16 crc kubenswrapper[4865]: I0103 04:36:16.998121 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jbl5\" (UniqueName: \"kubernetes.io/projected/fd307a63-8983-4a93-9c7f-e961c5eb6620-kube-api-access-5jbl5\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.024718 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.100298 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.248522 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"270f44f6-3136-45cd-8c79-08c89bda5409","Type":"ContainerDied","Data":"0ef16394fb4daa5db20bd96779e589e0b600e08e89269c9631e6bf49af876abf"} Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.248593 4865 scope.go:117] "RemoveContainer" containerID="0c43deb92ec181f53301696c36b7c9df5d0ed7e6b794ccb4c2db17879ed96c7c" Jan 03 04:36:17 
crc kubenswrapper[4865]: I0103 04:36:17.248771 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.256644 4865 generic.go:334] "Generic (PLEG): container finished" podID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerID="a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8" exitCode=0 Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.256896 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.257472 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd307a63-8983-4a93-9c7f-e961c5eb6620","Type":"ContainerDied","Data":"a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8"} Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.257609 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd307a63-8983-4a93-9c7f-e961c5eb6620","Type":"ContainerDied","Data":"9b05b59c31c25a9d9b55ecc1676bb384d41a35914f6f63b3b8456d2a97e125f6"} Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.260197 4865 generic.go:334] "Generic (PLEG): container finished" podID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerID="694b1e16a154f9a6f3c0e0680b8b829ac7548cf807ecae71fedd48daf6040f07" exitCode=0 Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.260466 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b4884f8d-ksw2z" event={"ID":"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f","Type":"ContainerDied","Data":"694b1e16a154f9a6f3c0e0680b8b829ac7548cf807ecae71fedd48daf6040f07"} Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.271171 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc9469fc6-wdk7w" 
event={"ID":"70339b26-8f06-4fe7-821e-cc376084eace","Type":"ContainerDied","Data":"35674883e25d50aaaa16d502603e87c61c88a491365c770f299e6c23495f4259"} Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.271262 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cc9469fc6-wdk7w" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.273222 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a3ac055f-a850-4676-8bc2-0cd50509ff30","Type":"ContainerStarted","Data":"5b4382f7f0bcc9afb651a656f8b6197b44b677ee158fadfcc6f726d1dd0b6420"} Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.278822 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.279137 4865 scope.go:117] "RemoveContainer" containerID="2365eb41d15fcfaa004c56e16bff6d7d0f87c0bfee23b0738131f319022b5e03" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.286873 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.299922 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.310085 4865 scope.go:117] "RemoveContainer" containerID="699d7f3e3d36ad24cf1da27f839436542f4e85ad4481784617973f2fc7317aea" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.310847 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321019 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321468 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-central-agent" Jan 03 04:36:17 crc 
kubenswrapper[4865]: I0103 04:36:17.321488 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-central-agent" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321508 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-notification-agent" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321516 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-notification-agent" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321529 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321535 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321552 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-log" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321559 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-log" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321574 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-httpd" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321581 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-httpd" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321595 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="sg-core" Jan 03 04:36:17 crc 
kubenswrapper[4865]: I0103 04:36:17.321603 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="sg-core" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321617 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon-log" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321624 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon-log" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.321637 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="proxy-httpd" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321644 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="proxy-httpd" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321834 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-central-agent" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321851 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-httpd" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321861 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="ceilometer-notification-agent" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321879 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321892 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="70339b26-8f06-4fe7-821e-cc376084eace" containerName="horizon-log" Jan 03 04:36:17 crc 
kubenswrapper[4865]: I0103 04:36:17.321906 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="proxy-httpd" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321922 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" containerName="sg-core" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.321930 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" containerName="glance-log" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.323847 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.325642 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.326871 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.338299 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cc9469fc6-wdk7w"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.348540 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cc9469fc6-wdk7w"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.352658 4865 scope.go:117] "RemoveContainer" containerID="410f82eb55954cdac6aea54dc914d51106839f1938e901ee09344c39a67eb370" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.365772 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.376325 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.379496 4865 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/openstackclient" podStartSLOduration=2.964321334 podStartE2EDuration="16.379479938s" podCreationTimestamp="2026-01-03 04:36:01 +0000 UTC" firstStartedPulling="2026-01-03 04:36:02.952190227 +0000 UTC m=+1190.069243402" lastFinishedPulling="2026-01-03 04:36:16.367348821 +0000 UTC m=+1203.484402006" observedRunningTime="2026-01-03 04:36:17.330933666 +0000 UTC m=+1204.447986851" watchObservedRunningTime="2026-01-03 04:36:17.379479938 +0000 UTC m=+1204.496533123" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.380428 4865 scope.go:117] "RemoveContainer" containerID="a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.383264 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.386020 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.386290 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.391591 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.410740 4865 scope.go:117] "RemoveContainer" containerID="d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.427481 4865 scope.go:117] "RemoveContainer" containerID="a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.428085 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8\": 
container with ID starting with a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8 not found: ID does not exist" containerID="a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.428141 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8"} err="failed to get container status \"a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8\": rpc error: code = NotFound desc = could not find container \"a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8\": container with ID starting with a9dda6731869b6b4f7c75251a6d6f9bbf8d28c2e16cb262c3fedf3c706c997d8 not found: ID does not exist" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.428175 4865 scope.go:117] "RemoveContainer" containerID="d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0" Jan 03 04:36:17 crc kubenswrapper[4865]: E0103 04:36:17.428514 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0\": container with ID starting with d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0 not found: ID does not exist" containerID="d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.428610 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0"} err="failed to get container status \"d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0\": rpc error: code = NotFound desc = could not find container \"d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0\": container with ID starting with 
d0c0f20aa5cd3e75cb9f03ce8b42a13e0e863362d9c17d22d704aaa7567236f0 not found: ID does not exist" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.429648 4865 scope.go:117] "RemoveContainer" containerID="e8c51f3df0fa17191585fbd98de1dd756c10077859e51459e1c8c79c4ab744ad" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.507757 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.507835 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-log-httpd\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.507873 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mjz\" (UniqueName: \"kubernetes.io/projected/084cf46b-cebc-47eb-8fde-58e5a971d79b-kube-api-access-k2mjz\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.507903 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-config-data\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.507970 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508000 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508024 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-run-httpd\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508052 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508087 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508108 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508148 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508169 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508219 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-logs\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508256 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtqt\" (UniqueName: \"kubernetes.io/projected/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-kube-api-access-5vtqt\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.508307 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-scripts\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.599074 4865 scope.go:117] "RemoveContainer" containerID="90c28fa2ba55e8b0146141174ba280b6b86aa0c44d4d082839188cf483036d50" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.612265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.612486 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.612578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-run-httpd\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.612650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.612728 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.612835 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.612939 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613039 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613143 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-logs\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613240 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vtqt\" (UniqueName: \"kubernetes.io/projected/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-kube-api-access-5vtqt\") pod 
\"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613368 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-scripts\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613514 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613640 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-log-httpd\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613738 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mjz\" (UniqueName: \"kubernetes.io/projected/084cf46b-cebc-47eb-8fde-58e5a971d79b-kube-api-access-k2mjz\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-config-data\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613890 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-logs\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613273 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-run-httpd\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.613575 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.614291 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.614434 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-log-httpd\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.617003 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.617726 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.619913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-scripts\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.622019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.626922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.628502 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc 
kubenswrapper[4865]: I0103 04:36:17.630030 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.630920 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mjz\" (UniqueName: \"kubernetes.io/projected/084cf46b-cebc-47eb-8fde-58e5a971d79b-kube-api-access-k2mjz\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.631221 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-config-data\") pod \"ceilometer-0\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") " pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.632171 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vtqt\" (UniqueName: \"kubernetes.io/projected/75b65689-ffa6-4b7c-b6c2-2f8e48f4a333-kube-api-access-5vtqt\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.653283 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333\") " pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.653830 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.703170 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 03 04:36:17 crc kubenswrapper[4865]: I0103 04:36:17.845015 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.024349 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-ovndb-tls-certs\") pod \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.024487 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-httpd-config\") pod \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.024602 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-combined-ca-bundle\") pod \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.024639 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-config\") pod \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.024679 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn26n\" 
(UniqueName: \"kubernetes.io/projected/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-kube-api-access-kn26n\") pod \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\" (UID: \"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f\") " Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.034460 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-kube-api-access-kn26n" (OuterVolumeSpecName: "kube-api-access-kn26n") pod "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" (UID: "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f"). InnerVolumeSpecName "kube-api-access-kn26n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.037510 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" (UID: "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.085321 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" (UID: "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.095158 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-config" (OuterVolumeSpecName: "config") pod "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" (UID: "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.115463 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" (UID: "3d48a8a3-a23b-4ebd-862d-c95ec0bf070f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.128319 4865 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.128519 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.128539 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.128562 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.128580 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn26n\" (UniqueName: \"kubernetes.io/projected/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f-kube-api-access-kn26n\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.140912 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:18 crc 
kubenswrapper[4865]: I0103 04:36:18.286634 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b4884f8d-ksw2z" event={"ID":"3d48a8a3-a23b-4ebd-862d-c95ec0bf070f","Type":"ContainerDied","Data":"551a10e4d3dab9016546dcfa4dc0f234c010bb8cf75e348f76cdfd432ef0b10a"} Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.286683 4865 scope.go:117] "RemoveContainer" containerID="ac07760e3fcc67b9bc34a826fec51c89ba07084899d8c4b104cbc9faf033e78b" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.286795 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b4884f8d-ksw2z" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.297630 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerStarted","Data":"16f10f0caafa5594571e5d898c970a1ec4f911d445e6b65495140e512dc77afb"} Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.328790 4865 scope.go:117] "RemoveContainer" containerID="694b1e16a154f9a6f3c0e0680b8b829ac7548cf807ecae71fedd48daf6040f07" Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.330661 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66b4884f8d-ksw2z"] Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.345742 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66b4884f8d-ksw2z"] Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.379437 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 03 04:36:18 crc kubenswrapper[4865]: I0103 04:36:18.654591 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:19 crc kubenswrapper[4865]: I0103 04:36:19.168607 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270f44f6-3136-45cd-8c79-08c89bda5409" 
path="/var/lib/kubelet/pods/270f44f6-3136-45cd-8c79-08c89bda5409/volumes" Jan 03 04:36:19 crc kubenswrapper[4865]: I0103 04:36:19.169945 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" path="/var/lib/kubelet/pods/3d48a8a3-a23b-4ebd-862d-c95ec0bf070f/volumes" Jan 03 04:36:19 crc kubenswrapper[4865]: I0103 04:36:19.171950 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70339b26-8f06-4fe7-821e-cc376084eace" path="/var/lib/kubelet/pods/70339b26-8f06-4fe7-821e-cc376084eace/volumes" Jan 03 04:36:19 crc kubenswrapper[4865]: I0103 04:36:19.172813 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd307a63-8983-4a93-9c7f-e961c5eb6620" path="/var/lib/kubelet/pods/fd307a63-8983-4a93-9c7f-e961c5eb6620/volumes" Jan 03 04:36:19 crc kubenswrapper[4865]: I0103 04:36:19.312287 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333","Type":"ContainerStarted","Data":"9d70e49f9e7c1c542df17396d2288f70095e096753f055aae4bda95a4d7b6b69"} Jan 03 04:36:19 crc kubenswrapper[4865]: I0103 04:36:19.312614 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333","Type":"ContainerStarted","Data":"3288895eb181fefed98b480575d24fdcfcc2dbc5bd8e1369eacf36c87df39177"} Jan 03 04:36:19 crc kubenswrapper[4865]: I0103 04:36:19.315249 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerStarted","Data":"7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116"} Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.285691 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.345877 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerStarted","Data":"bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549"} Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.348666 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75b65689-ffa6-4b7c-b6c2-2f8e48f4a333","Type":"ContainerStarted","Data":"9ba416e68cf85f354dacfc35f3fb48531526ef3f634b9e2ac98f8a49180823b5"} Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.360940 4865 generic.go:334] "Generic (PLEG): container finished" podID="990d694e-66b0-4fdc-b826-9e0149853b25" containerID="511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9" exitCode=0 Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.360987 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"990d694e-66b0-4fdc-b826-9e0149853b25","Type":"ContainerDied","Data":"511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9"} Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.361012 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"990d694e-66b0-4fdc-b826-9e0149853b25","Type":"ContainerDied","Data":"6da31f6b8f4d0e34750492838927ba05446c5ddf52d3441605f775f66593ca0c"} Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.361027 4865 scope.go:117] "RemoveContainer" containerID="511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.361163 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.379230 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.3792084190000002 podStartE2EDuration="3.379208419s" podCreationTimestamp="2026-01-03 04:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:20.369160717 +0000 UTC m=+1207.486213902" watchObservedRunningTime="2026-01-03 04:36:20.379208419 +0000 UTC m=+1207.496261604" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.392527 4865 scope.go:117] "RemoveContainer" containerID="fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.414776 4865 scope.go:117] "RemoveContainer" containerID="511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9" Jan 03 04:36:20 crc kubenswrapper[4865]: E0103 04:36:20.415195 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9\": container with ID starting with 511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9 not found: ID does not exist" containerID="511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.415225 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9"} err="failed to get container status \"511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9\": rpc error: code = NotFound desc = could not find container \"511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9\": container with ID starting with 
511bbb64e7faa694f26979d3c80a80cccb0124a7bfa15725a2f9e4cd08a8d0b9 not found: ID does not exist" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.415246 4865 scope.go:117] "RemoveContainer" containerID="fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524" Jan 03 04:36:20 crc kubenswrapper[4865]: E0103 04:36:20.415440 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524\": container with ID starting with fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524 not found: ID does not exist" containerID="fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.415472 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524"} err="failed to get container status \"fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524\": rpc error: code = NotFound desc = could not find container \"fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524\": container with ID starting with fa269e25e9bcf8eb6e12c88e2e8a8f1ec64f7a9a9d2ee8ef70de139b8f283524 not found: ID does not exist" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480337 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-config-data\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480463 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-combined-ca-bundle\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: 
\"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480573 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-public-tls-certs\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480604 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-httpd-run\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480705 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-scripts\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480750 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480785 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-logs\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.480865 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66mbh\" (UniqueName: 
\"kubernetes.io/projected/990d694e-66b0-4fdc-b826-9e0149853b25-kube-api-access-66mbh\") pod \"990d694e-66b0-4fdc-b826-9e0149853b25\" (UID: \"990d694e-66b0-4fdc-b826-9e0149853b25\") " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.481697 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.482351 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-logs" (OuterVolumeSpecName: "logs") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.490735 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.490744 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/990d694e-66b0-4fdc-b826-9e0149853b25-kube-api-access-66mbh" (OuterVolumeSpecName: "kube-api-access-66mbh") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "kube-api-access-66mbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.518925 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-scripts" (OuterVolumeSpecName: "scripts") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.535570 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.573173 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.589116 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66mbh\" (UniqueName: \"kubernetes.io/projected/990d694e-66b0-4fdc-b826-9e0149853b25-kube-api-access-66mbh\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.589148 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.589161 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.589170 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.589178 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.589200 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.589211 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d694e-66b0-4fdc-b826-9e0149853b25-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.609619 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-config-data" (OuterVolumeSpecName: "config-data") pod "990d694e-66b0-4fdc-b826-9e0149853b25" (UID: "990d694e-66b0-4fdc-b826-9e0149853b25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.615821 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-phgms"] Jan 03 04:36:20 crc kubenswrapper[4865]: E0103 04:36:20.616923 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-httpd" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.616952 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-httpd" Jan 03 04:36:20 crc kubenswrapper[4865]: E0103 04:36:20.616967 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-log" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.616975 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-log" Jan 03 04:36:20 crc kubenswrapper[4865]: E0103 04:36:20.616987 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-api" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.616992 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-api" Jan 03 04:36:20 crc kubenswrapper[4865]: E0103 04:36:20.617005 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-httpd" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.617011 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-httpd" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.617176 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-api" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.617193 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-httpd" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.617207 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d48a8a3-a23b-4ebd-862d-c95ec0bf070f" containerName="neutron-httpd" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.617217 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" containerName="glance-log" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.617789 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.630311 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-phgms"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.636501 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.702667 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990d694e-66b0-4fdc-b826-9e0149853b25-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.702771 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 
04:36:20.712194 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5tjfp"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.714556 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.727253 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5tjfp"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.746999 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b0b1-account-create-update-mlj6w"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.748728 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.751899 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.775414 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b0b1-account-create-update-mlj6w"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.783617 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.792887 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.805276 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d8ce9c-3980-4c17-b824-ee567eb03edd-operator-scripts\") pod \"nova-api-db-create-phgms\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") " pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.805445 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trbdl\" (UniqueName: \"kubernetes.io/projected/49d8ce9c-3980-4c17-b824-ee567eb03edd-kube-api-access-trbdl\") pod \"nova-api-db-create-phgms\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") " pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.824436 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.825937 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.829568 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.833138 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.833589 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.862453 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4wmtx"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.863908 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.873134 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4wmtx"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909613 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trbdl\" (UniqueName: \"kubernetes.io/projected/49d8ce9c-3980-4c17-b824-ee567eb03edd-kube-api-access-trbdl\") pod \"nova-api-db-create-phgms\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") " pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909660 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6m6\" (UniqueName: \"kubernetes.io/projected/7b7b3518-8b72-4c31-8232-98f2fd0d4966-kube-api-access-bg6m6\") pod \"nova-cell1-db-create-4wmtx\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") " pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7b3518-8b72-4c31-8232-98f2fd0d4966-operator-scripts\") pod \"nova-cell1-db-create-4wmtx\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") " pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909802 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-operator-scripts\") pod \"nova-api-b0b1-account-create-update-mlj6w\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") " pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909830 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d88ffb-d26d-4f63-980c-4313f401541c-operator-scripts\") pod \"nova-cell0-db-create-5tjfp\" (UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") " pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909855 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d8ce9c-3980-4c17-b824-ee567eb03edd-operator-scripts\") pod \"nova-api-db-create-phgms\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") " pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909874 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47js2\" (UniqueName: \"kubernetes.io/projected/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-kube-api-access-47js2\") pod \"nova-api-b0b1-account-create-update-mlj6w\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") " pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.909943 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnbgb\" (UniqueName: \"kubernetes.io/projected/e4d88ffb-d26d-4f63-980c-4313f401541c-kube-api-access-cnbgb\") pod \"nova-cell0-db-create-5tjfp\" (UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") " pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.910861 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d8ce9c-3980-4c17-b824-ee567eb03edd-operator-scripts\") pod \"nova-api-db-create-phgms\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") " pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 
04:36:20.913014 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1d07-account-create-update-m8fgm"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.914273 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.917878 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.919875 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1d07-account-create-update-m8fgm"] Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.934131 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trbdl\" (UniqueName: \"kubernetes.io/projected/49d8ce9c-3980-4c17-b824-ee567eb03edd-kube-api-access-trbdl\") pod \"nova-api-db-create-phgms\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") " pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:20 crc kubenswrapper[4865]: I0103 04:36:20.946843 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-phgms" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011423 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011468 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-config-data\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011516 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg6m6\" (UniqueName: \"kubernetes.io/projected/7b7b3518-8b72-4c31-8232-98f2fd0d4966-kube-api-access-bg6m6\") pod \"nova-cell1-db-create-4wmtx\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") " pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011558 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011722 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011773 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7b3518-8b72-4c31-8232-98f2fd0d4966-operator-scripts\") pod \"nova-cell1-db-create-4wmtx\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") " pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011805 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-operator-scripts\") pod \"nova-api-b0b1-account-create-update-mlj6w\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") " pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011837 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011861 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-logs\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011879 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d88ffb-d26d-4f63-980c-4313f401541c-operator-scripts\") pod \"nova-cell0-db-create-5tjfp\" 
(UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") " pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011916 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47js2\" (UniqueName: \"kubernetes.io/projected/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-kube-api-access-47js2\") pod \"nova-api-b0b1-account-create-update-mlj6w\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") " pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.011968 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.012065 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnbgb\" (UniqueName: \"kubernetes.io/projected/e4d88ffb-d26d-4f63-980c-4313f401541c-kube-api-access-cnbgb\") pod \"nova-cell0-db-create-5tjfp\" (UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") " pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.012113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgs9b\" (UniqueName: \"kubernetes.io/projected/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-kube-api-access-sgs9b\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.012701 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-operator-scripts\") 
pod \"nova-api-b0b1-account-create-update-mlj6w\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") " pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.013157 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7b3518-8b72-4c31-8232-98f2fd0d4966-operator-scripts\") pod \"nova-cell1-db-create-4wmtx\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") " pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.015688 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d88ffb-d26d-4f63-980c-4313f401541c-operator-scripts\") pod \"nova-cell0-db-create-5tjfp\" (UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") " pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.028214 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg6m6\" (UniqueName: \"kubernetes.io/projected/7b7b3518-8b72-4c31-8232-98f2fd0d4966-kube-api-access-bg6m6\") pod \"nova-cell1-db-create-4wmtx\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") " pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.029011 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnbgb\" (UniqueName: \"kubernetes.io/projected/e4d88ffb-d26d-4f63-980c-4313f401541c-kube-api-access-cnbgb\") pod \"nova-cell0-db-create-5tjfp\" (UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") " pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.033278 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47js2\" (UniqueName: \"kubernetes.io/projected/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-kube-api-access-47js2\") pod 
\"nova-api-b0b1-account-create-update-mlj6w\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") " pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.052748 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5tjfp" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.078459 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b0b1-account-create-update-mlj6w" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.106747 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db47-account-create-update-9bncr"] Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.107816 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db47-account-create-update-9bncr" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.112973 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.114511 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.114573 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5l4\" (UniqueName: \"kubernetes.io/projected/65fe916e-b334-4fb6-9e63-8491fa16cffc-kube-api-access-vc5l4\") pod \"nova-cell0-1d07-account-create-update-m8fgm\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") " pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.114625 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-scripts\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.114653 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.114669 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-logs\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.114721 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.117608 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-logs\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.117697 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.118233 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.119073 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgs9b\" (UniqueName: \"kubernetes.io/projected/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-kube-api-access-sgs9b\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.119215 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.119284 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fe916e-b334-4fb6-9e63-8491fa16cffc-operator-scripts\") pod \"nova-cell0-1d07-account-create-update-m8fgm\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") " pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.119311 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-config-data\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.123797 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db47-account-create-update-9bncr"] Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.126125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.127221 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-scripts\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.135680 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-config-data\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.137182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.145365 
4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgs9b\" (UniqueName: \"kubernetes.io/projected/879b9de0-9d7c-46dc-b9b1-80c16cbebaa0-kube-api-access-sgs9b\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.188136 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4wmtx" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.188529 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990d694e-66b0-4fdc-b826-9e0149853b25" path="/var/lib/kubelet/pods/990d694e-66b0-4fdc-b826-9e0149853b25/volumes" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.192271 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0\") " pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.222432 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fe916e-b334-4fb6-9e63-8491fa16cffc-operator-scripts\") pod \"nova-cell0-1d07-account-create-update-m8fgm\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") " pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.222513 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8ws\" (UniqueName: \"kubernetes.io/projected/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-kube-api-access-gz8ws\") pod \"nova-cell1-db47-account-create-update-9bncr\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") " pod="openstack/nova-cell1-db47-account-create-update-9bncr" 
Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.222537 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc5l4\" (UniqueName: \"kubernetes.io/projected/65fe916e-b334-4fb6-9e63-8491fa16cffc-kube-api-access-vc5l4\") pod \"nova-cell0-1d07-account-create-update-m8fgm\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") " pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.222569 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-operator-scripts\") pod \"nova-cell1-db47-account-create-update-9bncr\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") " pod="openstack/nova-cell1-db47-account-create-update-9bncr" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.223526 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fe916e-b334-4fb6-9e63-8491fa16cffc-operator-scripts\") pod \"nova-cell0-1d07-account-create-update-m8fgm\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") " pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.246479 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc5l4\" (UniqueName: \"kubernetes.io/projected/65fe916e-b334-4fb6-9e63-8491fa16cffc-kube-api-access-vc5l4\") pod \"nova-cell0-1d07-account-create-update-m8fgm\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") " pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.325485 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz8ws\" (UniqueName: \"kubernetes.io/projected/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-kube-api-access-gz8ws\") pod 
\"nova-cell1-db47-account-create-update-9bncr\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") " pod="openstack/nova-cell1-db47-account-create-update-9bncr" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.325570 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-operator-scripts\") pod \"nova-cell1-db47-account-create-update-9bncr\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") " pod="openstack/nova-cell1-db47-account-create-update-9bncr" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.327246 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-operator-scripts\") pod \"nova-cell1-db47-account-create-update-9bncr\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") " pod="openstack/nova-cell1-db47-account-create-update-9bncr" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.352147 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz8ws\" (UniqueName: \"kubernetes.io/projected/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-kube-api-access-gz8ws\") pod \"nova-cell1-db47-account-create-update-9bncr\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") " pod="openstack/nova-cell1-db47-account-create-update-9bncr" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.356525 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db47-account-create-update-9bncr" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.400842 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-phgms"] Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.403987 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerStarted","Data":"5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531"} Jan 03 04:36:21 crc kubenswrapper[4865]: W0103 04:36:21.417863 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49d8ce9c_3980_4c17_b824_ee567eb03edd.slice/crio-9ec375dbf6d7562e1ce5f5c6c3c018405c705ed499d04b28b393c6359c25c243 WatchSource:0}: Error finding container 9ec375dbf6d7562e1ce5f5c6c3c018405c705ed499d04b28b393c6359c25c243: Status 404 returned error can't find the container with id 9ec375dbf6d7562e1ce5f5c6c3c018405c705ed499d04b28b393c6359c25c243 Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.445215 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.540111 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.654847 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b0b1-account-create-update-mlj6w"] Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.678779 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5tjfp"] Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.832058 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4wmtx"] Jan 03 04:36:21 crc kubenswrapper[4865]: I0103 04:36:21.871689 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db47-account-create-update-9bncr"] Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.159924 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.167084 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1d07-account-create-update-m8fgm"] Jan 03 04:36:22 crc kubenswrapper[4865]: W0103 04:36:22.173051 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65fe916e_b334_4fb6_9e63_8491fa16cffc.slice/crio-2ede453fff6ef7aac49d7630978851a644250bbad5c680b769575cf9677da8d8 WatchSource:0}: Error finding container 2ede453fff6ef7aac49d7630978851a644250bbad5c680b769575cf9677da8d8: Status 404 returned error can't find the container with id 2ede453fff6ef7aac49d7630978851a644250bbad5c680b769575cf9677da8d8 Jan 03 04:36:22 crc kubenswrapper[4865]: E0103 04:36:22.450751 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4d88ffb_d26d_4f63_980c_4313f401541c.slice/crio-conmon-1cfc267180ee50c51f7d0b17fcbded1620bcd2dbab832a75eec41ab5020c8edd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4d88ffb_d26d_4f63_980c_4313f401541c.slice/crio-1cfc267180ee50c51f7d0b17fcbded1620bcd2dbab832a75eec41ab5020c8edd.scope\": RecentStats: unable to find data in memory cache]" Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.475868 4865 generic.go:334] "Generic (PLEG): container finished" podID="e4d88ffb-d26d-4f63-980c-4313f401541c" containerID="1cfc267180ee50c51f7d0b17fcbded1620bcd2dbab832a75eec41ab5020c8edd" exitCode=0 Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.476355 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5tjfp" event={"ID":"e4d88ffb-d26d-4f63-980c-4313f401541c","Type":"ContainerDied","Data":"1cfc267180ee50c51f7d0b17fcbded1620bcd2dbab832a75eec41ab5020c8edd"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.476418 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5tjfp" event={"ID":"e4d88ffb-d26d-4f63-980c-4313f401541c","Type":"ContainerStarted","Data":"68bc928133eeb8598698fb2d17655ad918522a8bc493f175b808eaa86581095b"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.483623 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db47-account-create-update-9bncr" event={"ID":"4c15343b-751f-4ae7-8f7b-6fc5714d4d16","Type":"ContainerStarted","Data":"3e705aeec1985a870106af6ddcd50d97528e28eafa6b50b51969f029259c1b4f"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.483667 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db47-account-create-update-9bncr" 
event={"ID":"4c15343b-751f-4ae7-8f7b-6fc5714d4d16","Type":"ContainerStarted","Data":"f1125c0e5458bc397f77cbbe077fcd0353275a49a7cd84aa772394db715cf688"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.494022 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b0b1-account-create-update-mlj6w" event={"ID":"5340eb93-db7a-4d2f-b33c-c3f5913c12cc","Type":"ContainerStarted","Data":"372d9ac7e02cba3b2d3ec7becc1719b6eda049c584aa730c77ff084618c867a4"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.494088 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b0b1-account-create-update-mlj6w" event={"ID":"5340eb93-db7a-4d2f-b33c-c3f5913c12cc","Type":"ContainerStarted","Data":"2006ab5180e3d4248ca39b568231345de8852e470bdc46a96bf6712d71644912"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.497681 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4wmtx" event={"ID":"7b7b3518-8b72-4c31-8232-98f2fd0d4966","Type":"ContainerStarted","Data":"31f9beaaf2b8d6137497567a7ce3c87ef472cd1b45aaee2c197282db87e34db3"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.497897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4wmtx" event={"ID":"7b7b3518-8b72-4c31-8232-98f2fd0d4966","Type":"ContainerStarted","Data":"3c3fe98c0e6653a160fb1f9bb2c36d2c4943344f3a0a7ad4ef78856b2b276556"} Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.500248 4865 generic.go:334] "Generic (PLEG): container finished" podID="49d8ce9c-3980-4c17-b824-ee567eb03edd" containerID="fd757cee28b8e6b3a180c118d7af6c71a7dd9d0556a61e4c3b5e9191540eb5bb" exitCode=0 Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.500298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-phgms" event={"ID":"49d8ce9c-3980-4c17-b824-ee567eb03edd","Type":"ContainerDied","Data":"fd757cee28b8e6b3a180c118d7af6c71a7dd9d0556a61e4c3b5e9191540eb5bb"} 
Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.500317 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-phgms" event={"ID":"49d8ce9c-3980-4c17-b824-ee567eb03edd","Type":"ContainerStarted","Data":"9ec375dbf6d7562e1ce5f5c6c3c018405c705ed499d04b28b393c6359c25c243"}
Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.504247 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0","Type":"ContainerStarted","Data":"9565e5fec39d8f015c031dc1971d01883f6947dee976808ad6e932b85a96061c"}
Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.506086 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" event={"ID":"65fe916e-b334-4fb6-9e63-8491fa16cffc","Type":"ContainerStarted","Data":"2ede453fff6ef7aac49d7630978851a644250bbad5c680b769575cf9677da8d8"}
Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.506454 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db47-account-create-update-9bncr" podStartSLOduration=1.506443097 podStartE2EDuration="1.506443097s" podCreationTimestamp="2026-01-03 04:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:22.502277604 +0000 UTC m=+1209.619330789" watchObservedRunningTime="2026-01-03 04:36:22.506443097 +0000 UTC m=+1209.623496282"
Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.518466 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b0b1-account-create-update-mlj6w" podStartSLOduration=2.518448401 podStartE2EDuration="2.518448401s" podCreationTimestamp="2026-01-03 04:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:22.5120903 +0000 UTC m=+1209.629143485" watchObservedRunningTime="2026-01-03 04:36:22.518448401 +0000 UTC m=+1209.635501586"
Jan 03 04:36:22 crc kubenswrapper[4865]: I0103 04:36:22.556984 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-4wmtx" podStartSLOduration=2.556965563 podStartE2EDuration="2.556965563s" podCreationTimestamp="2026-01-03 04:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:22.548310059 +0000 UTC m=+1209.665363244" watchObservedRunningTime="2026-01-03 04:36:22.556965563 +0000 UTC m=+1209.674018768"
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.518482 4865 generic.go:334] "Generic (PLEG): container finished" podID="65fe916e-b334-4fb6-9e63-8491fa16cffc" containerID="888ce525b5cb51762119ce0807fe0b4f18a44666c63fe3170dd19697dec3bce2" exitCode=0
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.518541 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" event={"ID":"65fe916e-b334-4fb6-9e63-8491fa16cffc","Type":"ContainerDied","Data":"888ce525b5cb51762119ce0807fe0b4f18a44666c63fe3170dd19697dec3bce2"}
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.526988 4865 generic.go:334] "Generic (PLEG): container finished" podID="4c15343b-751f-4ae7-8f7b-6fc5714d4d16" containerID="3e705aeec1985a870106af6ddcd50d97528e28eafa6b50b51969f029259c1b4f" exitCode=0
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.527144 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db47-account-create-update-9bncr" event={"ID":"4c15343b-751f-4ae7-8f7b-6fc5714d4d16","Type":"ContainerDied","Data":"3e705aeec1985a870106af6ddcd50d97528e28eafa6b50b51969f029259c1b4f"}
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.530150 4865 generic.go:334] "Generic (PLEG): container finished" podID="5340eb93-db7a-4d2f-b33c-c3f5913c12cc" containerID="372d9ac7e02cba3b2d3ec7becc1719b6eda049c584aa730c77ff084618c867a4" exitCode=0
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.530206 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b0b1-account-create-update-mlj6w" event={"ID":"5340eb93-db7a-4d2f-b33c-c3f5913c12cc","Type":"ContainerDied","Data":"372d9ac7e02cba3b2d3ec7becc1719b6eda049c584aa730c77ff084618c867a4"}
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.537882 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerStarted","Data":"7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856"}
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.538082 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-central-agent" containerID="cri-o://7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116" gracePeriod=30
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.538304 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.538338 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="proxy-httpd" containerID="cri-o://7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856" gracePeriod=30
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.538373 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="sg-core" containerID="cri-o://5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531" gracePeriod=30
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.538420 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-notification-agent" containerID="cri-o://bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549" gracePeriod=30
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.543561 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b7b3518-8b72-4c31-8232-98f2fd0d4966" containerID="31f9beaaf2b8d6137497567a7ce3c87ef472cd1b45aaee2c197282db87e34db3" exitCode=0
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.543632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4wmtx" event={"ID":"7b7b3518-8b72-4c31-8232-98f2fd0d4966","Type":"ContainerDied","Data":"31f9beaaf2b8d6137497567a7ce3c87ef472cd1b45aaee2c197282db87e34db3"}
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.552638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0","Type":"ContainerStarted","Data":"53a1a2cf33aa282d195d7be40125b1f354d8989ca715ad8ff25796c6afac0c85"}
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.552677 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"879b9de0-9d7c-46dc-b9b1-80c16cbebaa0","Type":"ContainerStarted","Data":"da68a30e235e5352f7d462233560fceeae44b676d429131b92d0cf783f7757e8"}
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.624056 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5358751059999998 podStartE2EDuration="6.624033705s" podCreationTimestamp="2026-01-03 04:36:17 +0000 UTC" firstStartedPulling="2026-01-03 04:36:18.150466248 +0000 UTC m=+1205.267519453" lastFinishedPulling="2026-01-03 04:36:22.238624857 +0000 UTC m=+1209.355678052" observedRunningTime="2026-01-03 04:36:23.615921205 +0000 UTC m=+1210.732974390" watchObservedRunningTime="2026-01-03 04:36:23.624033705 +0000 UTC m=+1210.741086890"
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.638586 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.638568987 podStartE2EDuration="3.638568987s" podCreationTimestamp="2026-01-03 04:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:36:23.637751616 +0000 UTC m=+1210.754804801" watchObservedRunningTime="2026-01-03 04:36:23.638568987 +0000 UTC m=+1210.755622162"
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.959017 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5tjfp"
Jan 03 04:36:23 crc kubenswrapper[4865]: I0103 04:36:23.973698 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-phgms"
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.129278 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d88ffb-d26d-4f63-980c-4313f401541c-operator-scripts\") pod \"e4d88ffb-d26d-4f63-980c-4313f401541c\" (UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") "
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.129454 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d8ce9c-3980-4c17-b824-ee567eb03edd-operator-scripts\") pod \"49d8ce9c-3980-4c17-b824-ee567eb03edd\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") "
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.129489 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnbgb\" (UniqueName: \"kubernetes.io/projected/e4d88ffb-d26d-4f63-980c-4313f401541c-kube-api-access-cnbgb\") pod \"e4d88ffb-d26d-4f63-980c-4313f401541c\" (UID: \"e4d88ffb-d26d-4f63-980c-4313f401541c\") "
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.129555 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trbdl\" (UniqueName: \"kubernetes.io/projected/49d8ce9c-3980-4c17-b824-ee567eb03edd-kube-api-access-trbdl\") pod \"49d8ce9c-3980-4c17-b824-ee567eb03edd\" (UID: \"49d8ce9c-3980-4c17-b824-ee567eb03edd\") "
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.130275 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d8ce9c-3980-4c17-b824-ee567eb03edd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49d8ce9c-3980-4c17-b824-ee567eb03edd" (UID: "49d8ce9c-3980-4c17-b824-ee567eb03edd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.130432 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d88ffb-d26d-4f63-980c-4313f401541c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4d88ffb-d26d-4f63-980c-4313f401541c" (UID: "e4d88ffb-d26d-4f63-980c-4313f401541c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.138904 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d88ffb-d26d-4f63-980c-4313f401541c-kube-api-access-cnbgb" (OuterVolumeSpecName: "kube-api-access-cnbgb") pod "e4d88ffb-d26d-4f63-980c-4313f401541c" (UID: "e4d88ffb-d26d-4f63-980c-4313f401541c"). InnerVolumeSpecName "kube-api-access-cnbgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.156593 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d8ce9c-3980-4c17-b824-ee567eb03edd-kube-api-access-trbdl" (OuterVolumeSpecName: "kube-api-access-trbdl") pod "49d8ce9c-3980-4c17-b824-ee567eb03edd" (UID: "49d8ce9c-3980-4c17-b824-ee567eb03edd"). InnerVolumeSpecName "kube-api-access-trbdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.234584 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d8ce9c-3980-4c17-b824-ee567eb03edd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.234782 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnbgb\" (UniqueName: \"kubernetes.io/projected/e4d88ffb-d26d-4f63-980c-4313f401541c-kube-api-access-cnbgb\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.234903 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trbdl\" (UniqueName: \"kubernetes.io/projected/49d8ce9c-3980-4c17-b824-ee567eb03edd-kube-api-access-trbdl\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.235016 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d88ffb-d26d-4f63-980c-4313f401541c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.567540 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-phgms" event={"ID":"49d8ce9c-3980-4c17-b824-ee567eb03edd","Type":"ContainerDied","Data":"9ec375dbf6d7562e1ce5f5c6c3c018405c705ed499d04b28b393c6359c25c243"}
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.569179 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec375dbf6d7562e1ce5f5c6c3c018405c705ed499d04b28b393c6359c25c243"
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.567827 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-phgms"
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.569490 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5tjfp" event={"ID":"e4d88ffb-d26d-4f63-980c-4313f401541c","Type":"ContainerDied","Data":"68bc928133eeb8598698fb2d17655ad918522a8bc493f175b808eaa86581095b"}
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.569896 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68bc928133eeb8598698fb2d17655ad918522a8bc493f175b808eaa86581095b"
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.569573 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5tjfp"
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.573880 4865 generic.go:334] "Generic (PLEG): container finished" podID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerID="7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856" exitCode=0
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.573931 4865 generic.go:334] "Generic (PLEG): container finished" podID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerID="5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531" exitCode=2
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.573947 4865 generic.go:334] "Generic (PLEG): container finished" podID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerID="bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549" exitCode=0
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.574202 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerDied","Data":"7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856"}
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.574266 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerDied","Data":"5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531"}
Jan 03 04:36:24 crc kubenswrapper[4865]: I0103 04:36:24.574298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerDied","Data":"bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549"}
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.058043 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db47-account-create-update-9bncr"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.151794 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-operator-scripts\") pod \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.152030 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz8ws\" (UniqueName: \"kubernetes.io/projected/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-kube-api-access-gz8ws\") pod \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\" (UID: \"4c15343b-751f-4ae7-8f7b-6fc5714d4d16\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.153783 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c15343b-751f-4ae7-8f7b-6fc5714d4d16" (UID: "4c15343b-751f-4ae7-8f7b-6fc5714d4d16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.159986 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-kube-api-access-gz8ws" (OuterVolumeSpecName: "kube-api-access-gz8ws") pod "4c15343b-751f-4ae7-8f7b-6fc5714d4d16" (UID: "4c15343b-751f-4ae7-8f7b-6fc5714d4d16"). InnerVolumeSpecName "kube-api-access-gz8ws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.223462 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1d07-account-create-update-m8fgm"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.228848 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b0b1-account-create-update-mlj6w"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.234793 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4wmtx"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.265729 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz8ws\" (UniqueName: \"kubernetes.io/projected/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-kube-api-access-gz8ws\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.265763 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c15343b-751f-4ae7-8f7b-6fc5714d4d16-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.366883 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7b3518-8b72-4c31-8232-98f2fd0d4966-operator-scripts\") pod \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.366981 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg6m6\" (UniqueName: \"kubernetes.io/projected/7b7b3518-8b72-4c31-8232-98f2fd0d4966-kube-api-access-bg6m6\") pod \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\" (UID: \"7b7b3518-8b72-4c31-8232-98f2fd0d4966\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.367010 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47js2\" (UniqueName: \"kubernetes.io/projected/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-kube-api-access-47js2\") pod \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.367067 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fe916e-b334-4fb6-9e63-8491fa16cffc-operator-scripts\") pod \"65fe916e-b334-4fb6-9e63-8491fa16cffc\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.367135 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-operator-scripts\") pod \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\" (UID: \"5340eb93-db7a-4d2f-b33c-c3f5913c12cc\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.367191 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc5l4\" (UniqueName: \"kubernetes.io/projected/65fe916e-b334-4fb6-9e63-8491fa16cffc-kube-api-access-vc5l4\") pod \"65fe916e-b334-4fb6-9e63-8491fa16cffc\" (UID: \"65fe916e-b334-4fb6-9e63-8491fa16cffc\") "
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.367508 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65fe916e-b334-4fb6-9e63-8491fa16cffc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65fe916e-b334-4fb6-9e63-8491fa16cffc" (UID: "65fe916e-b334-4fb6-9e63-8491fa16cffc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.367660 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5340eb93-db7a-4d2f-b33c-c3f5913c12cc" (UID: "5340eb93-db7a-4d2f-b33c-c3f5913c12cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.367737 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7b3518-8b72-4c31-8232-98f2fd0d4966-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b7b3518-8b72-4c31-8232-98f2fd0d4966" (UID: "7b7b3518-8b72-4c31-8232-98f2fd0d4966"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.370369 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65fe916e-b334-4fb6-9e63-8491fa16cffc-kube-api-access-vc5l4" (OuterVolumeSpecName: "kube-api-access-vc5l4") pod "65fe916e-b334-4fb6-9e63-8491fa16cffc" (UID: "65fe916e-b334-4fb6-9e63-8491fa16cffc"). InnerVolumeSpecName "kube-api-access-vc5l4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.370929 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-kube-api-access-47js2" (OuterVolumeSpecName: "kube-api-access-47js2") pod "5340eb93-db7a-4d2f-b33c-c3f5913c12cc" (UID: "5340eb93-db7a-4d2f-b33c-c3f5913c12cc"). InnerVolumeSpecName "kube-api-access-47js2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.371482 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7b3518-8b72-4c31-8232-98f2fd0d4966-kube-api-access-bg6m6" (OuterVolumeSpecName: "kube-api-access-bg6m6") pod "7b7b3518-8b72-4c31-8232-98f2fd0d4966" (UID: "7b7b3518-8b72-4c31-8232-98f2fd0d4966"). InnerVolumeSpecName "kube-api-access-bg6m6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.468777 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7b3518-8b72-4c31-8232-98f2fd0d4966-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.468807 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg6m6\" (UniqueName: \"kubernetes.io/projected/7b7b3518-8b72-4c31-8232-98f2fd0d4966-kube-api-access-bg6m6\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.468817 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47js2\" (UniqueName: \"kubernetes.io/projected/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-kube-api-access-47js2\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.468827 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65fe916e-b334-4fb6-9e63-8491fa16cffc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.468835 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5340eb93-db7a-4d2f-b33c-c3f5913c12cc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.468844 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc5l4\" (UniqueName: \"kubernetes.io/projected/65fe916e-b334-4fb6-9e63-8491fa16cffc-kube-api-access-vc5l4\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.583506 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4wmtx" event={"ID":"7b7b3518-8b72-4c31-8232-98f2fd0d4966","Type":"ContainerDied","Data":"3c3fe98c0e6653a160fb1f9bb2c36d2c4943344f3a0a7ad4ef78856b2b276556"}
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.583946 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3fe98c0e6653a160fb1f9bb2c36d2c4943344f3a0a7ad4ef78856b2b276556"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.583625 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4wmtx"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.585177 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1d07-account-create-update-m8fgm" event={"ID":"65fe916e-b334-4fb6-9e63-8491fa16cffc","Type":"ContainerDied","Data":"2ede453fff6ef7aac49d7630978851a644250bbad5c680b769575cf9677da8d8"}
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.585189 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1d07-account-create-update-m8fgm"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.585200 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ede453fff6ef7aac49d7630978851a644250bbad5c680b769575cf9677da8d8"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.586755 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db47-account-create-update-9bncr" event={"ID":"4c15343b-751f-4ae7-8f7b-6fc5714d4d16","Type":"ContainerDied","Data":"f1125c0e5458bc397f77cbbe077fcd0353275a49a7cd84aa772394db715cf688"}
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.586781 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1125c0e5458bc397f77cbbe077fcd0353275a49a7cd84aa772394db715cf688"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.586841 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db47-account-create-update-9bncr"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.588932 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b0b1-account-create-update-mlj6w" event={"ID":"5340eb93-db7a-4d2f-b33c-c3f5913c12cc","Type":"ContainerDied","Data":"2006ab5180e3d4248ca39b568231345de8852e470bdc46a96bf6712d71644912"}
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.588957 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2006ab5180e3d4248ca39b568231345de8852e470bdc46a96bf6712d71644912"
Jan 03 04:36:25 crc kubenswrapper[4865]: I0103 04:36:25.589005 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b0b1-account-create-update-mlj6w"
Jan 03 04:36:27 crc kubenswrapper[4865]: I0103 04:36:27.703935 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 03 04:36:27 crc kubenswrapper[4865]: I0103 04:36:27.705186 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 03 04:36:27 crc kubenswrapper[4865]: I0103 04:36:27.737348 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 03 04:36:27 crc kubenswrapper[4865]: I0103 04:36:27.743577 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 03 04:36:28 crc kubenswrapper[4865]: I0103 04:36:28.618073 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 03 04:36:28 crc kubenswrapper[4865]: I0103 04:36:28.618281 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.541060 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.634386 4865 generic.go:334] "Generic (PLEG): container finished" podID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerID="7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116" exitCode=0
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.634501 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.634552 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerDied","Data":"7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116"}
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.634590 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"084cf46b-cebc-47eb-8fde-58e5a971d79b","Type":"ContainerDied","Data":"16f10f0caafa5594571e5d898c970a1ec4f911d445e6b65495140e512dc77afb"}
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.634608 4865 scope.go:117] "RemoveContainer" containerID="7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.648308 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-log-httpd\") pod \"084cf46b-cebc-47eb-8fde-58e5a971d79b\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") "
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.648415 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-config-data\") pod \"084cf46b-cebc-47eb-8fde-58e5a971d79b\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") "
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.648449 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-run-httpd\") pod \"084cf46b-cebc-47eb-8fde-58e5a971d79b\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") "
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.648509 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-scripts\") pod \"084cf46b-cebc-47eb-8fde-58e5a971d79b\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") "
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.648550 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2mjz\" (UniqueName: \"kubernetes.io/projected/084cf46b-cebc-47eb-8fde-58e5a971d79b-kube-api-access-k2mjz\") pod \"084cf46b-cebc-47eb-8fde-58e5a971d79b\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") "
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.648656 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-combined-ca-bundle\") pod \"084cf46b-cebc-47eb-8fde-58e5a971d79b\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") "
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.648709 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-sg-core-conf-yaml\") pod \"084cf46b-cebc-47eb-8fde-58e5a971d79b\" (UID: \"084cf46b-cebc-47eb-8fde-58e5a971d79b\") "
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.650094 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "084cf46b-cebc-47eb-8fde-58e5a971d79b" (UID: "084cf46b-cebc-47eb-8fde-58e5a971d79b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.650818 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "084cf46b-cebc-47eb-8fde-58e5a971d79b" (UID: "084cf46b-cebc-47eb-8fde-58e5a971d79b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.657078 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084cf46b-cebc-47eb-8fde-58e5a971d79b-kube-api-access-k2mjz" (OuterVolumeSpecName: "kube-api-access-k2mjz") pod "084cf46b-cebc-47eb-8fde-58e5a971d79b" (UID: "084cf46b-cebc-47eb-8fde-58e5a971d79b"). InnerVolumeSpecName "kube-api-access-k2mjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.657935 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-scripts" (OuterVolumeSpecName: "scripts") pod "084cf46b-cebc-47eb-8fde-58e5a971d79b" (UID: "084cf46b-cebc-47eb-8fde-58e5a971d79b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.666829 4865 scope.go:117] "RemoveContainer" containerID="5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.680973 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "084cf46b-cebc-47eb-8fde-58e5a971d79b" (UID: "084cf46b-cebc-47eb-8fde-58e5a971d79b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.751200 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.751466 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.751477 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/084cf46b-cebc-47eb-8fde-58e5a971d79b-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.751486 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-scripts\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.751496 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2mjz\" (UniqueName: \"kubernetes.io/projected/084cf46b-cebc-47eb-8fde-58e5a971d79b-kube-api-access-k2mjz\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.762224 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "084cf46b-cebc-47eb-8fde-58e5a971d79b" (UID: "084cf46b-cebc-47eb-8fde-58e5a971d79b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.776643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-config-data" (OuterVolumeSpecName: "config-data") pod "084cf46b-cebc-47eb-8fde-58e5a971d79b" (UID: "084cf46b-cebc-47eb-8fde-58e5a971d79b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.853014 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-config-data\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.853053 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/084cf46b-cebc-47eb-8fde-58e5a971d79b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.861231 4865 scope.go:117] "RemoveContainer" containerID="bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.879460 4865 scope.go:117] "RemoveContainer" containerID="7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.898258 4865 scope.go:117] "RemoveContainer" containerID="7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856"
Jan 03 04:36:29 crc kubenswrapper[4865]: E0103 04:36:29.898819 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856\": container with ID starting with 7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856 not found: ID does not exist" containerID="7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.898892 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856"} err="failed to get container status \"7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856\": rpc error: code = NotFound desc = could not find container \"7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856\": container with ID starting with 7c8a6a370af715bb5ed5aa9ae0f4d5454821fd16d7d72ac070640b9b8f6c6856 not found: ID does not exist"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.898921 4865 scope.go:117] "RemoveContainer" containerID="5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531"
Jan 03 04:36:29 crc kubenswrapper[4865]: E0103 04:36:29.899361 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531\": container with ID starting with 5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531 not found: ID does not exist" containerID="5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531"
Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.899432 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531"} err="failed to get container status \"5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531\": rpc 
error: code = NotFound desc = could not find container \"5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531\": container with ID starting with 5c9199739a09f0fd7fbc35b989a9006265162510ac47404f58f2d7c5e66a2531 not found: ID does not exist" Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.899463 4865 scope.go:117] "RemoveContainer" containerID="bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549" Jan 03 04:36:29 crc kubenswrapper[4865]: E0103 04:36:29.899820 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549\": container with ID starting with bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549 not found: ID does not exist" containerID="bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549" Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.899854 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549"} err="failed to get container status \"bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549\": rpc error: code = NotFound desc = could not find container \"bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549\": container with ID starting with bc5382483d288902b8356344bda7dea212866c950d13aa8b464311a43939f549 not found: ID does not exist" Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.899875 4865 scope.go:117] "RemoveContainer" containerID="7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116" Jan 03 04:36:29 crc kubenswrapper[4865]: E0103 04:36:29.900273 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116\": container with ID starting with 
7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116 not found: ID does not exist" containerID="7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116" Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.900301 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116"} err="failed to get container status \"7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116\": rpc error: code = NotFound desc = could not find container \"7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116\": container with ID starting with 7658636e8962c444fd5c06c7c0998066e73073aaf576d792f26a21fa0b403116 not found: ID does not exist" Jan 03 04:36:29 crc kubenswrapper[4865]: I0103 04:36:29.979651 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.000805 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.016808 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017400 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c15343b-751f-4ae7-8f7b-6fc5714d4d16" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017491 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c15343b-751f-4ae7-8f7b-6fc5714d4d16" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017521 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7b3518-8b72-4c31-8232-98f2fd0d4966" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017536 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7b7b3518-8b72-4c31-8232-98f2fd0d4966" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017554 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fe916e-b334-4fb6-9e63-8491fa16cffc" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017568 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fe916e-b334-4fb6-9e63-8491fa16cffc" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017595 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="sg-core" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017608 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="sg-core" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017632 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d8ce9c-3980-4c17-b824-ee567eb03edd" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017646 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d8ce9c-3980-4c17-b824-ee567eb03edd" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017667 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d88ffb-d26d-4f63-980c-4313f401541c" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017679 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d88ffb-d26d-4f63-980c-4313f401541c" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017702 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-central-agent" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017716 4865 
state_mem.go:107] "Deleted CPUSet assignment" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-central-agent" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017733 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-notification-agent" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017746 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-notification-agent" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017773 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="proxy-httpd" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017785 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="proxy-httpd" Jan 03 04:36:30 crc kubenswrapper[4865]: E0103 04:36:30.017801 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5340eb93-db7a-4d2f-b33c-c3f5913c12cc" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.017815 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5340eb93-db7a-4d2f-b33c-c3f5913c12cc" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018128 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d88ffb-d26d-4f63-980c-4313f401541c" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018154 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="65fe916e-b334-4fb6-9e63-8491fa16cffc" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018171 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="sg-core" Jan 
03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018196 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c15343b-751f-4ae7-8f7b-6fc5714d4d16" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018216 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d8ce9c-3980-4c17-b824-ee567eb03edd" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018239 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5340eb93-db7a-4d2f-b33c-c3f5913c12cc" containerName="mariadb-account-create-update" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018267 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7b3518-8b72-4c31-8232-98f2fd0d4966" containerName="mariadb-database-create" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018289 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-notification-agent" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018308 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="proxy-httpd" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.018338 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" containerName="ceilometer-central-agent" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.021350 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.026134 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.026402 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.030115 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.157679 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.157761 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvq5g\" (UniqueName: \"kubernetes.io/projected/b90458e4-d2b7-44c2-96d2-6f76b039221e-kube-api-access-hvq5g\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.157805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-scripts\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.157887 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-log-httpd\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " 
pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.157942 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-config-data\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.158033 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-run-httpd\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.158090 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260070 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-log-httpd\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260139 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-config-data\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260225 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-run-httpd\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260298 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260348 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvq5g\" (UniqueName: \"kubernetes.io/projected/b90458e4-d2b7-44c2-96d2-6f76b039221e-kube-api-access-hvq5g\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260379 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-scripts\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.260954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-log-httpd\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc 
kubenswrapper[4865]: I0103 04:36:30.261271 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-run-httpd\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.265465 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.265642 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-scripts\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.274208 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-config-data\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.275130 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.287343 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvq5g\" (UniqueName: \"kubernetes.io/projected/b90458e4-d2b7-44c2-96d2-6f76b039221e-kube-api-access-hvq5g\") pod \"ceilometer-0\" (UID: 
\"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.340838 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.502198 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.503036 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 03 04:36:30 crc kubenswrapper[4865]: I0103 04:36:30.775670 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.150686 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zwbkb"] Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.152004 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.154550 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.154836 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2gzb6" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.155897 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.229759 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084cf46b-cebc-47eb-8fde-58e5a971d79b" path="/var/lib/kubelet/pods/084cf46b-cebc-47eb-8fde-58e5a971d79b/volumes" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.230909 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zwbkb"] Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.318502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-config-data\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.318551 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.318624 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-scripts\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.318683 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pxfd\" (UniqueName: \"kubernetes.io/projected/2155186b-b606-42e4-b728-f62c6c8b156a-kube-api-access-9pxfd\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.420212 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-scripts\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.420656 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pxfd\" (UniqueName: \"kubernetes.io/projected/2155186b-b606-42e4-b728-f62c6c8b156a-kube-api-access-9pxfd\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.420799 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-config-data\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.420887 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.425190 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-scripts\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.425306 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-config-data\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.428179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.435469 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pxfd\" (UniqueName: \"kubernetes.io/projected/2155186b-b606-42e4-b728-f62c6c8b156a-kube-api-access-9pxfd\") pod \"nova-cell0-conductor-db-sync-zwbkb\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.446454 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.446493 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.483065 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.488466 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.529972 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.709457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerStarted","Data":"03777575067eb343d1612d5f1918895180473a1af0ac4659c063c88623e53466"} Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.709743 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.709759 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerStarted","Data":"87d516c799727f7e9fa6867e633e1cb40e4aa6d93779a1221d18680f9888b475"} Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.709770 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 03 04:36:31 crc kubenswrapper[4865]: W0103 04:36:31.802503 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2155186b_b606_42e4_b728_f62c6c8b156a.slice/crio-bd77bfb79d21a17f0395c12bacdafba768683120980f582479803de337f49b47 WatchSource:0}: Error finding container bd77bfb79d21a17f0395c12bacdafba768683120980f582479803de337f49b47: Status 404 returned error can't find the container with id bd77bfb79d21a17f0395c12bacdafba768683120980f582479803de337f49b47 Jan 03 04:36:31 crc kubenswrapper[4865]: I0103 04:36:31.820358 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zwbkb"] Jan 03 04:36:32 crc kubenswrapper[4865]: I0103 04:36:32.253948 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:32 crc kubenswrapper[4865]: I0103 04:36:32.718837 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" event={"ID":"2155186b-b606-42e4-b728-f62c6c8b156a","Type":"ContainerStarted","Data":"bd77bfb79d21a17f0395c12bacdafba768683120980f582479803de337f49b47"} Jan 03 04:36:33 crc kubenswrapper[4865]: I0103 04:36:33.493157 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 03 04:36:33 crc kubenswrapper[4865]: I0103 04:36:33.715623 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 03 04:36:33 crc kubenswrapper[4865]: I0103 04:36:33.786671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerStarted","Data":"8266345a281d5537eb5fa46bef7d31caf27e2c7664d7319c4ac39196bbc4f1de"} Jan 03 04:36:34 crc kubenswrapper[4865]: I0103 04:36:34.801346 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerStarted","Data":"af39f27ac9f6295912965f7eeee536edf082962aa908941c984c04eb10e77010"} Jan 03 04:36:35 crc kubenswrapper[4865]: I0103 04:36:35.812249 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerStarted","Data":"b3e60957b8a1ecd5ac8b87bfc99ba1bfb404d1f900e078e2396ab175d23cd4de"} Jan 03 04:36:35 crc kubenswrapper[4865]: I0103 04:36:35.812369 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-central-agent" containerID="cri-o://03777575067eb343d1612d5f1918895180473a1af0ac4659c063c88623e53466" gracePeriod=30 Jan 03 04:36:35 crc kubenswrapper[4865]: I0103 04:36:35.812587 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 04:36:35 crc kubenswrapper[4865]: I0103 04:36:35.812601 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="proxy-httpd" containerID="cri-o://b3e60957b8a1ecd5ac8b87bfc99ba1bfb404d1f900e078e2396ab175d23cd4de" gracePeriod=30 Jan 03 04:36:35 crc kubenswrapper[4865]: I0103 04:36:35.812659 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="sg-core" containerID="cri-o://af39f27ac9f6295912965f7eeee536edf082962aa908941c984c04eb10e77010" gracePeriod=30 Jan 03 04:36:35 crc kubenswrapper[4865]: I0103 04:36:35.812695 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-notification-agent" containerID="cri-o://8266345a281d5537eb5fa46bef7d31caf27e2c7664d7319c4ac39196bbc4f1de" 
gracePeriod=30 Jan 03 04:36:35 crc kubenswrapper[4865]: I0103 04:36:35.839892 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.873260556 podStartE2EDuration="6.839872301s" podCreationTimestamp="2026-01-03 04:36:29 +0000 UTC" firstStartedPulling="2026-01-03 04:36:30.782670248 +0000 UTC m=+1217.899723433" lastFinishedPulling="2026-01-03 04:36:34.749281993 +0000 UTC m=+1221.866335178" observedRunningTime="2026-01-03 04:36:35.832910803 +0000 UTC m=+1222.949963988" watchObservedRunningTime="2026-01-03 04:36:35.839872301 +0000 UTC m=+1222.956925496" Jan 03 04:36:36 crc kubenswrapper[4865]: I0103 04:36:36.827926 4865 generic.go:334] "Generic (PLEG): container finished" podID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerID="b3e60957b8a1ecd5ac8b87bfc99ba1bfb404d1f900e078e2396ab175d23cd4de" exitCode=0 Jan 03 04:36:36 crc kubenswrapper[4865]: I0103 04:36:36.827976 4865 generic.go:334] "Generic (PLEG): container finished" podID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerID="af39f27ac9f6295912965f7eeee536edf082962aa908941c984c04eb10e77010" exitCode=2 Jan 03 04:36:36 crc kubenswrapper[4865]: I0103 04:36:36.827992 4865 generic.go:334] "Generic (PLEG): container finished" podID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerID="8266345a281d5537eb5fa46bef7d31caf27e2c7664d7319c4ac39196bbc4f1de" exitCode=0 Jan 03 04:36:36 crc kubenswrapper[4865]: I0103 04:36:36.828021 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerDied","Data":"b3e60957b8a1ecd5ac8b87bfc99ba1bfb404d1f900e078e2396ab175d23cd4de"} Jan 03 04:36:36 crc kubenswrapper[4865]: I0103 04:36:36.828057 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerDied","Data":"af39f27ac9f6295912965f7eeee536edf082962aa908941c984c04eb10e77010"} Jan 03 
04:36:36 crc kubenswrapper[4865]: I0103 04:36:36.828076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerDied","Data":"8266345a281d5537eb5fa46bef7d31caf27e2c7664d7319c4ac39196bbc4f1de"} Jan 03 04:36:40 crc kubenswrapper[4865]: I0103 04:36:40.739893 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:36:40 crc kubenswrapper[4865]: I0103 04:36:40.740366 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:36:41 crc kubenswrapper[4865]: I0103 04:36:41.882712 4865 generic.go:334] "Generic (PLEG): container finished" podID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerID="03777575067eb343d1612d5f1918895180473a1af0ac4659c063c88623e53466" exitCode=0 Jan 03 04:36:41 crc kubenswrapper[4865]: I0103 04:36:41.882760 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerDied","Data":"03777575067eb343d1612d5f1918895180473a1af0ac4659c063c88623e53466"} Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.211202 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.233693 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvq5g\" (UniqueName: \"kubernetes.io/projected/b90458e4-d2b7-44c2-96d2-6f76b039221e-kube-api-access-hvq5g\") pod \"b90458e4-d2b7-44c2-96d2-6f76b039221e\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.233754 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-config-data\") pod \"b90458e4-d2b7-44c2-96d2-6f76b039221e\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.233810 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-sg-core-conf-yaml\") pod \"b90458e4-d2b7-44c2-96d2-6f76b039221e\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.233863 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-combined-ca-bundle\") pod \"b90458e4-d2b7-44c2-96d2-6f76b039221e\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.233891 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-scripts\") pod \"b90458e4-d2b7-44c2-96d2-6f76b039221e\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.233920 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-log-httpd\") pod \"b90458e4-d2b7-44c2-96d2-6f76b039221e\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.233973 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-run-httpd\") pod \"b90458e4-d2b7-44c2-96d2-6f76b039221e\" (UID: \"b90458e4-d2b7-44c2-96d2-6f76b039221e\") " Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.236124 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b90458e4-d2b7-44c2-96d2-6f76b039221e" (UID: "b90458e4-d2b7-44c2-96d2-6f76b039221e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.236617 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b90458e4-d2b7-44c2-96d2-6f76b039221e" (UID: "b90458e4-d2b7-44c2-96d2-6f76b039221e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.245460 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-scripts" (OuterVolumeSpecName: "scripts") pod "b90458e4-d2b7-44c2-96d2-6f76b039221e" (UID: "b90458e4-d2b7-44c2-96d2-6f76b039221e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.246536 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90458e4-d2b7-44c2-96d2-6f76b039221e-kube-api-access-hvq5g" (OuterVolumeSpecName: "kube-api-access-hvq5g") pod "b90458e4-d2b7-44c2-96d2-6f76b039221e" (UID: "b90458e4-d2b7-44c2-96d2-6f76b039221e"). InnerVolumeSpecName "kube-api-access-hvq5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.277657 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b90458e4-d2b7-44c2-96d2-6f76b039221e" (UID: "b90458e4-d2b7-44c2-96d2-6f76b039221e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.336337 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.336367 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvq5g\" (UniqueName: \"kubernetes.io/projected/b90458e4-d2b7-44c2-96d2-6f76b039221e-kube-api-access-hvq5g\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.336380 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.336440 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.336449 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b90458e4-d2b7-44c2-96d2-6f76b039221e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.343999 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b90458e4-d2b7-44c2-96d2-6f76b039221e" (UID: "b90458e4-d2b7-44c2-96d2-6f76b039221e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.360317 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-config-data" (OuterVolumeSpecName: "config-data") pod "b90458e4-d2b7-44c2-96d2-6f76b039221e" (UID: "b90458e4-d2b7-44c2-96d2-6f76b039221e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.437327 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.437360 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b90458e4-d2b7-44c2-96d2-6f76b039221e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.895219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b90458e4-d2b7-44c2-96d2-6f76b039221e","Type":"ContainerDied","Data":"87d516c799727f7e9fa6867e633e1cb40e4aa6d93779a1221d18680f9888b475"} Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.895238 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.895611 4865 scope.go:117] "RemoveContainer" containerID="b3e60957b8a1ecd5ac8b87bfc99ba1bfb404d1f900e078e2396ab175d23cd4de" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.899694 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" event={"ID":"2155186b-b606-42e4-b728-f62c6c8b156a","Type":"ContainerStarted","Data":"c7bad2c4ee62f3e3f80b2a3f615bd31e8d2d026d1bfe272216668d28ca4bf3f0"} Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.936309 4865 scope.go:117] "RemoveContainer" containerID="af39f27ac9f6295912965f7eeee536edf082962aa908941c984c04eb10e77010" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.964811 4865 scope.go:117] "RemoveContainer" containerID="8266345a281d5537eb5fa46bef7d31caf27e2c7664d7319c4ac39196bbc4f1de" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.969864 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" podStartSLOduration=1.8327903110000001 podStartE2EDuration="11.969825999s" podCreationTimestamp="2026-01-03 04:36:31 +0000 UTC" firstStartedPulling="2026-01-03 04:36:31.819604875 +0000 UTC m=+1218.936658060" lastFinishedPulling="2026-01-03 04:36:41.956640563 +0000 UTC m=+1229.073693748" observedRunningTime="2026-01-03 04:36:42.917632709 +0000 UTC m=+1230.034685904" watchObservedRunningTime="2026-01-03 04:36:42.969825999 +0000 UTC m=+1230.086879184" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.990398 4865 scope.go:117] "RemoveContainer" containerID="03777575067eb343d1612d5f1918895180473a1af0ac4659c063c88623e53466" Jan 03 04:36:42 crc kubenswrapper[4865]: I0103 04:36:42.993954 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.005010 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.022355 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:43 crc kubenswrapper[4865]: E0103 04:36:43.022914 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-central-agent" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.022928 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-central-agent" Jan 03 04:36:43 crc kubenswrapper[4865]: E0103 04:36:43.022936 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="proxy-httpd" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.022942 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="proxy-httpd" Jan 
03 04:36:43 crc kubenswrapper[4865]: E0103 04:36:43.023612 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-notification-agent" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.023623 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-notification-agent" Jan 03 04:36:43 crc kubenswrapper[4865]: E0103 04:36:43.023698 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="sg-core" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.023706 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="sg-core" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.023897 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-central-agent" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.023912 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="ceilometer-notification-agent" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.023920 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="proxy-httpd" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.023930 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" containerName="sg-core" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.025568 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.027098 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.027215 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.032811 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.152439 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.153256 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-scripts\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.153415 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67x2c\" (UniqueName: \"kubernetes.io/projected/956606f5-54e9-48ff-938d-6934bdf48c49-kube-api-access-67x2c\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.153469 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-log-httpd\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " 
pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.153677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.153863 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-run-httpd\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.154084 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-config-data\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.167567 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b90458e4-d2b7-44c2-96d2-6f76b039221e" path="/var/lib/kubelet/pods/b90458e4-d2b7-44c2-96d2-6f76b039221e/volumes" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.255795 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.255892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-run-httpd\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.255974 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-config-data\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.256021 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.256067 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-scripts\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.256133 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67x2c\" (UniqueName: \"kubernetes.io/projected/956606f5-54e9-48ff-938d-6934bdf48c49-kube-api-access-67x2c\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.256156 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-log-httpd\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 
04:36:43.257429 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-log-httpd\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.257506 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-run-httpd\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.262473 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.262511 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-scripts\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.263007 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.264139 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-config-data\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " 
pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.274079 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67x2c\" (UniqueName: \"kubernetes.io/projected/956606f5-54e9-48ff-938d-6934bdf48c49-kube-api-access-67x2c\") pod \"ceilometer-0\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.345213 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.812021 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:43 crc kubenswrapper[4865]: I0103 04:36:43.912840 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerStarted","Data":"0f31a82d3d088bbeb91fe85b67daab1a389e7247c38690474eceaa626ac78673"} Jan 03 04:36:44 crc kubenswrapper[4865]: I0103 04:36:44.925612 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerStarted","Data":"b48bb0dba9bd9bef00906ef9ba7089a777e0bc67413915635d53cd003b179636"} Jan 03 04:36:45 crc kubenswrapper[4865]: I0103 04:36:45.214337 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:45 crc kubenswrapper[4865]: I0103 04:36:45.941496 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerStarted","Data":"5c7a1ebb8936117c58f05b67b9087c4d8abb097121f12d00663d290aa95ad904"} Jan 03 04:36:46 crc kubenswrapper[4865]: I0103 04:36:46.966351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerStarted","Data":"74ded539669ecd9f5a29453bafb824e850c1e438ea2bcc0b5d104bf7e2c82bc5"} Jan 03 04:36:48 crc kubenswrapper[4865]: I0103 04:36:48.992110 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerStarted","Data":"17be18817b27949a84154e6419572bc721f2a2a9a8325e656bef8db03bc79909"} Jan 03 04:36:48 crc kubenswrapper[4865]: I0103 04:36:48.992927 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-central-agent" containerID="cri-o://b48bb0dba9bd9bef00906ef9ba7089a777e0bc67413915635d53cd003b179636" gracePeriod=30 Jan 03 04:36:48 crc kubenswrapper[4865]: I0103 04:36:48.993311 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 04:36:48 crc kubenswrapper[4865]: I0103 04:36:48.993775 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="proxy-httpd" containerID="cri-o://17be18817b27949a84154e6419572bc721f2a2a9a8325e656bef8db03bc79909" gracePeriod=30 Jan 03 04:36:48 crc kubenswrapper[4865]: I0103 04:36:48.993909 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="sg-core" containerID="cri-o://74ded539669ecd9f5a29453bafb824e850c1e438ea2bcc0b5d104bf7e2c82bc5" gracePeriod=30 Jan 03 04:36:48 crc kubenswrapper[4865]: I0103 04:36:48.993965 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-notification-agent" containerID="cri-o://5c7a1ebb8936117c58f05b67b9087c4d8abb097121f12d00663d290aa95ad904" 
gracePeriod=30 Jan 03 04:36:49 crc kubenswrapper[4865]: I0103 04:36:49.021445 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.654470644 podStartE2EDuration="7.02142387s" podCreationTimestamp="2026-01-03 04:36:42 +0000 UTC" firstStartedPulling="2026-01-03 04:36:43.813094822 +0000 UTC m=+1230.930148017" lastFinishedPulling="2026-01-03 04:36:48.180048058 +0000 UTC m=+1235.297101243" observedRunningTime="2026-01-03 04:36:49.01254093 +0000 UTC m=+1236.129594125" watchObservedRunningTime="2026-01-03 04:36:49.02142387 +0000 UTC m=+1236.138477065" Jan 03 04:36:50 crc kubenswrapper[4865]: I0103 04:36:50.012585 4865 generic.go:334] "Generic (PLEG): container finished" podID="956606f5-54e9-48ff-938d-6934bdf48c49" containerID="17be18817b27949a84154e6419572bc721f2a2a9a8325e656bef8db03bc79909" exitCode=0 Jan 03 04:36:50 crc kubenswrapper[4865]: I0103 04:36:50.012654 4865 generic.go:334] "Generic (PLEG): container finished" podID="956606f5-54e9-48ff-938d-6934bdf48c49" containerID="74ded539669ecd9f5a29453bafb824e850c1e438ea2bcc0b5d104bf7e2c82bc5" exitCode=2 Jan 03 04:36:50 crc kubenswrapper[4865]: I0103 04:36:50.012674 4865 generic.go:334] "Generic (PLEG): container finished" podID="956606f5-54e9-48ff-938d-6934bdf48c49" containerID="5c7a1ebb8936117c58f05b67b9087c4d8abb097121f12d00663d290aa95ad904" exitCode=0 Jan 03 04:36:50 crc kubenswrapper[4865]: I0103 04:36:50.012713 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerDied","Data":"17be18817b27949a84154e6419572bc721f2a2a9a8325e656bef8db03bc79909"} Jan 03 04:36:50 crc kubenswrapper[4865]: I0103 04:36:50.012804 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerDied","Data":"74ded539669ecd9f5a29453bafb824e850c1e438ea2bcc0b5d104bf7e2c82bc5"} Jan 03 04:36:50 
crc kubenswrapper[4865]: I0103 04:36:50.012840 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerDied","Data":"5c7a1ebb8936117c58f05b67b9087c4d8abb097121f12d00663d290aa95ad904"} Jan 03 04:36:51 crc kubenswrapper[4865]: I0103 04:36:51.052438 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerDied","Data":"b48bb0dba9bd9bef00906ef9ba7089a777e0bc67413915635d53cd003b179636"} Jan 03 04:36:51 crc kubenswrapper[4865]: I0103 04:36:51.052315 4865 generic.go:334] "Generic (PLEG): container finished" podID="956606f5-54e9-48ff-938d-6934bdf48c49" containerID="b48bb0dba9bd9bef00906ef9ba7089a777e0bc67413915635d53cd003b179636" exitCode=0 Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.110158 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.116490 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"956606f5-54e9-48ff-938d-6934bdf48c49","Type":"ContainerDied","Data":"0f31a82d3d088bbeb91fe85b67daab1a389e7247c38690474eceaa626ac78673"} Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.116539 4865 scope.go:117] "RemoveContainer" containerID="17be18817b27949a84154e6419572bc721f2a2a9a8325e656bef8db03bc79909" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.116547 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.149898 4865 scope.go:117] "RemoveContainer" containerID="74ded539669ecd9f5a29453bafb824e850c1e438ea2bcc0b5d104bf7e2c82bc5" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.176696 4865 scope.go:117] "RemoveContainer" containerID="5c7a1ebb8936117c58f05b67b9087c4d8abb097121f12d00663d290aa95ad904" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.193919 4865 scope.go:117] "RemoveContainer" containerID="b48bb0dba9bd9bef00906ef9ba7089a777e0bc67413915635d53cd003b179636" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.232541 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67x2c\" (UniqueName: \"kubernetes.io/projected/956606f5-54e9-48ff-938d-6934bdf48c49-kube-api-access-67x2c\") pod \"956606f5-54e9-48ff-938d-6934bdf48c49\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.232631 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-run-httpd\") pod \"956606f5-54e9-48ff-938d-6934bdf48c49\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.232684 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-combined-ca-bundle\") pod \"956606f5-54e9-48ff-938d-6934bdf48c49\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.232730 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-log-httpd\") pod \"956606f5-54e9-48ff-938d-6934bdf48c49\" (UID: 
\"956606f5-54e9-48ff-938d-6934bdf48c49\") " Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.232767 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-config-data\") pod \"956606f5-54e9-48ff-938d-6934bdf48c49\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.232798 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-scripts\") pod \"956606f5-54e9-48ff-938d-6934bdf48c49\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.232821 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-sg-core-conf-yaml\") pod \"956606f5-54e9-48ff-938d-6934bdf48c49\" (UID: \"956606f5-54e9-48ff-938d-6934bdf48c49\") " Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.233594 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "956606f5-54e9-48ff-938d-6934bdf48c49" (UID: "956606f5-54e9-48ff-938d-6934bdf48c49"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.233968 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "956606f5-54e9-48ff-938d-6934bdf48c49" (UID: "956606f5-54e9-48ff-938d-6934bdf48c49"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.238814 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-scripts" (OuterVolumeSpecName: "scripts") pod "956606f5-54e9-48ff-938d-6934bdf48c49" (UID: "956606f5-54e9-48ff-938d-6934bdf48c49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.251897 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956606f5-54e9-48ff-938d-6934bdf48c49-kube-api-access-67x2c" (OuterVolumeSpecName: "kube-api-access-67x2c") pod "956606f5-54e9-48ff-938d-6934bdf48c49" (UID: "956606f5-54e9-48ff-938d-6934bdf48c49"). InnerVolumeSpecName "kube-api-access-67x2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.262271 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "956606f5-54e9-48ff-938d-6934bdf48c49" (UID: "956606f5-54e9-48ff-938d-6934bdf48c49"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.333257 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "956606f5-54e9-48ff-938d-6934bdf48c49" (UID: "956606f5-54e9-48ff-938d-6934bdf48c49"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.335126 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67x2c\" (UniqueName: \"kubernetes.io/projected/956606f5-54e9-48ff-938d-6934bdf48c49-kube-api-access-67x2c\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.335155 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.335165 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.335174 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/956606f5-54e9-48ff-938d-6934bdf48c49-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.335182 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.335190 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.366302 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-config-data" (OuterVolumeSpecName: "config-data") pod "956606f5-54e9-48ff-938d-6934bdf48c49" (UID: "956606f5-54e9-48ff-938d-6934bdf48c49"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.436989 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956606f5-54e9-48ff-938d-6934bdf48c49-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.458519 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.471139 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498232 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:55 crc kubenswrapper[4865]: E0103 04:36:55.498595 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-notification-agent" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498613 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-notification-agent" Jan 03 04:36:55 crc kubenswrapper[4865]: E0103 04:36:55.498637 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="proxy-httpd" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498644 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="proxy-httpd" Jan 03 04:36:55 crc kubenswrapper[4865]: E0103 04:36:55.498661 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="sg-core" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498667 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="sg-core" Jan 03 04:36:55 
crc kubenswrapper[4865]: E0103 04:36:55.498681 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-central-agent" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498687 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-central-agent" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498859 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="proxy-httpd" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498872 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-notification-agent" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498887 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="sg-core" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.498897 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" containerName="ceilometer-central-agent" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.500416 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.503045 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.504201 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.513779 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.538429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.538473 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-run-httpd\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.538557 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.538585 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-scripts\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " 
pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.538648 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2k2\" (UniqueName: \"kubernetes.io/projected/ecdb1d03-2f79-474a-a420-69d683351240-kube-api-access-lm2k2\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.538813 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-log-httpd\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.538912 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-config-data\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.641068 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-scripts\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.641158 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2k2\" (UniqueName: \"kubernetes.io/projected/ecdb1d03-2f79-474a-a420-69d683351240-kube-api-access-lm2k2\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.641196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-log-httpd\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.641232 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-config-data\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.641252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.641269 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-run-httpd\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.641376 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.642094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-log-httpd\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc 
kubenswrapper[4865]: I0103 04:36:55.642226 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-run-httpd\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.646050 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-scripts\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.646533 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-config-data\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.648788 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.649255 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.675882 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2k2\" (UniqueName: \"kubernetes.io/projected/ecdb1d03-2f79-474a-a420-69d683351240-kube-api-access-lm2k2\") pod \"ceilometer-0\" (UID: 
\"ecdb1d03-2f79-474a-a420-69d683351240\") " pod="openstack/ceilometer-0" Jan 03 04:36:55 crc kubenswrapper[4865]: I0103 04:36:55.825438 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:36:56 crc kubenswrapper[4865]: I0103 04:36:56.370489 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:36:57 crc kubenswrapper[4865]: I0103 04:36:57.146043 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerStarted","Data":"ceef78671e87413a9f2aba33eee7487f0bd2bed72dbb68e8573284bb8d688929"} Jan 03 04:36:57 crc kubenswrapper[4865]: I0103 04:36:57.169429 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956606f5-54e9-48ff-938d-6934bdf48c49" path="/var/lib/kubelet/pods/956606f5-54e9-48ff-938d-6934bdf48c49/volumes" Jan 03 04:36:58 crc kubenswrapper[4865]: I0103 04:36:58.159353 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerStarted","Data":"984df135619470d8f883e0c3fd5e174aab8ad0c948074fabf2392b9e582aaec9"} Jan 03 04:36:58 crc kubenswrapper[4865]: I0103 04:36:58.159740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerStarted","Data":"922e1a99cf62121b5c9cba4f55e20d0cd64238d995a35e69647babd5463eaf49"} Jan 03 04:36:59 crc kubenswrapper[4865]: I0103 04:36:59.170358 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerStarted","Data":"af63e41a4cf78979d44eb4d2bf375fd491cbe0fc52cc057711d66aa381c1b922"} Jan 03 04:37:00 crc kubenswrapper[4865]: I0103 04:37:00.183237 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="2155186b-b606-42e4-b728-f62c6c8b156a" containerID="c7bad2c4ee62f3e3f80b2a3f615bd31e8d2d026d1bfe272216668d28ca4bf3f0" exitCode=0 Jan 03 04:37:00 crc kubenswrapper[4865]: I0103 04:37:00.183436 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" event={"ID":"2155186b-b606-42e4-b728-f62c6c8b156a","Type":"ContainerDied","Data":"c7bad2c4ee62f3e3f80b2a3f615bd31e8d2d026d1bfe272216668d28ca4bf3f0"} Jan 03 04:37:00 crc kubenswrapper[4865]: I0103 04:37:00.191766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerStarted","Data":"252b0b9da1213b561e3d4a97786619216a806e9eb63c7c756e0cbc2141a06093"} Jan 03 04:37:00 crc kubenswrapper[4865]: I0103 04:37:00.192215 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 04:37:00 crc kubenswrapper[4865]: I0103 04:37:00.228783 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6776089 podStartE2EDuration="5.228766167s" podCreationTimestamp="2026-01-03 04:36:55 +0000 UTC" firstStartedPulling="2026-01-03 04:36:56.374129498 +0000 UTC m=+1243.491182703" lastFinishedPulling="2026-01-03 04:36:59.925286765 +0000 UTC m=+1247.042339970" observedRunningTime="2026-01-03 04:37:00.219723973 +0000 UTC m=+1247.336777168" watchObservedRunningTime="2026-01-03 04:37:00.228766167 +0000 UTC m=+1247.345819352" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.582261 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.786440 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pxfd\" (UniqueName: \"kubernetes.io/projected/2155186b-b606-42e4-b728-f62c6c8b156a-kube-api-access-9pxfd\") pod \"2155186b-b606-42e4-b728-f62c6c8b156a\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.786636 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-scripts\") pod \"2155186b-b606-42e4-b728-f62c6c8b156a\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.786735 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-config-data\") pod \"2155186b-b606-42e4-b728-f62c6c8b156a\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.786790 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-combined-ca-bundle\") pod \"2155186b-b606-42e4-b728-f62c6c8b156a\" (UID: \"2155186b-b606-42e4-b728-f62c6c8b156a\") " Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.791733 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2155186b-b606-42e4-b728-f62c6c8b156a-kube-api-access-9pxfd" (OuterVolumeSpecName: "kube-api-access-9pxfd") pod "2155186b-b606-42e4-b728-f62c6c8b156a" (UID: "2155186b-b606-42e4-b728-f62c6c8b156a"). InnerVolumeSpecName "kube-api-access-9pxfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.792498 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-scripts" (OuterVolumeSpecName: "scripts") pod "2155186b-b606-42e4-b728-f62c6c8b156a" (UID: "2155186b-b606-42e4-b728-f62c6c8b156a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.833289 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2155186b-b606-42e4-b728-f62c6c8b156a" (UID: "2155186b-b606-42e4-b728-f62c6c8b156a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.835376 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-config-data" (OuterVolumeSpecName: "config-data") pod "2155186b-b606-42e4-b728-f62c6c8b156a" (UID: "2155186b-b606-42e4-b728-f62c6c8b156a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.893686 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pxfd\" (UniqueName: \"kubernetes.io/projected/2155186b-b606-42e4-b728-f62c6c8b156a-kube-api-access-9pxfd\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.893723 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.893735 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:01 crc kubenswrapper[4865]: I0103 04:37:01.893747 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2155186b-b606-42e4-b728-f62c6c8b156a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.213644 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" event={"ID":"2155186b-b606-42e4-b728-f62c6c8b156a","Type":"ContainerDied","Data":"bd77bfb79d21a17f0395c12bacdafba768683120980f582479803de337f49b47"} Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.213879 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd77bfb79d21a17f0395c12bacdafba768683120980f582479803de337f49b47" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.213961 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zwbkb" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.321934 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 03 04:37:02 crc kubenswrapper[4865]: E0103 04:37:02.322471 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2155186b-b606-42e4-b728-f62c6c8b156a" containerName="nova-cell0-conductor-db-sync" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.322493 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2155186b-b606-42e4-b728-f62c6c8b156a" containerName="nova-cell0-conductor-db-sync" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.322748 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2155186b-b606-42e4-b728-f62c6c8b156a" containerName="nova-cell0-conductor-db-sync" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.323602 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.325838 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.330760 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2gzb6" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.336111 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.507760 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fca133-2a81-47d7-8f20-62c55d10c3e6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 
04:37:02.507860 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhkrb\" (UniqueName: \"kubernetes.io/projected/a5fca133-2a81-47d7-8f20-62c55d10c3e6-kube-api-access-hhkrb\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.507941 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fca133-2a81-47d7-8f20-62c55d10c3e6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.611072 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fca133-2a81-47d7-8f20-62c55d10c3e6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.611178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhkrb\" (UniqueName: \"kubernetes.io/projected/a5fca133-2a81-47d7-8f20-62c55d10c3e6-kube-api-access-hhkrb\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.611259 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fca133-2a81-47d7-8f20-62c55d10c3e6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.616276 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fca133-2a81-47d7-8f20-62c55d10c3e6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.616303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fca133-2a81-47d7-8f20-62c55d10c3e6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.637170 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhkrb\" (UniqueName: \"kubernetes.io/projected/a5fca133-2a81-47d7-8f20-62c55d10c3e6-kube-api-access-hhkrb\") pod \"nova-cell0-conductor-0\" (UID: \"a5fca133-2a81-47d7-8f20-62c55d10c3e6\") " pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.655343 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:02 crc kubenswrapper[4865]: I0103 04:37:02.993726 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 03 04:37:03 crc kubenswrapper[4865]: W0103 04:37:03.002448 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5fca133_2a81_47d7_8f20_62c55d10c3e6.slice/crio-127b4665c5a6db60bf1e752e951261092b79a38bcc6fce32fbd253cfef0bf578 WatchSource:0}: Error finding container 127b4665c5a6db60bf1e752e951261092b79a38bcc6fce32fbd253cfef0bf578: Status 404 returned error can't find the container with id 127b4665c5a6db60bf1e752e951261092b79a38bcc6fce32fbd253cfef0bf578 Jan 03 04:37:03 crc kubenswrapper[4865]: I0103 04:37:03.228993 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a5fca133-2a81-47d7-8f20-62c55d10c3e6","Type":"ContainerStarted","Data":"127b4665c5a6db60bf1e752e951261092b79a38bcc6fce32fbd253cfef0bf578"} Jan 03 04:37:04 crc kubenswrapper[4865]: I0103 04:37:04.242855 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a5fca133-2a81-47d7-8f20-62c55d10c3e6","Type":"ContainerStarted","Data":"e01db3bf166437488daf37e0692701790b029e9ffb168b222e58ff0296ffcaa3"} Jan 03 04:37:04 crc kubenswrapper[4865]: I0103 04:37:04.243291 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:04 crc kubenswrapper[4865]: I0103 04:37:04.278519 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.278453208 podStartE2EDuration="2.278453208s" podCreationTimestamp="2026-01-03 04:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 
04:37:04.260619766 +0000 UTC m=+1251.377672981" watchObservedRunningTime="2026-01-03 04:37:04.278453208 +0000 UTC m=+1251.395506433" Jan 03 04:37:10 crc kubenswrapper[4865]: I0103 04:37:10.739720 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:37:10 crc kubenswrapper[4865]: I0103 04:37:10.740572 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:37:10 crc kubenswrapper[4865]: I0103 04:37:10.740641 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:37:10 crc kubenswrapper[4865]: I0103 04:37:10.741763 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a82e68c06b39e809cdca2872b7c7b72d7a687416c8815b2c0f9636f63f6ab156"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:37:10 crc kubenswrapper[4865]: I0103 04:37:10.741829 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://a82e68c06b39e809cdca2872b7c7b72d7a687416c8815b2c0f9636f63f6ab156" gracePeriod=600 Jan 03 04:37:11 crc kubenswrapper[4865]: I0103 04:37:11.308684 4865 generic.go:334] 
"Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="a82e68c06b39e809cdca2872b7c7b72d7a687416c8815b2c0f9636f63f6ab156" exitCode=0 Jan 03 04:37:11 crc kubenswrapper[4865]: I0103 04:37:11.308769 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"a82e68c06b39e809cdca2872b7c7b72d7a687416c8815b2c0f9636f63f6ab156"} Jan 03 04:37:11 crc kubenswrapper[4865]: I0103 04:37:11.308949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"f70e7e3b6f466cd92b76640e9a405cdf202ff8ab85d90bf5c8de1a794992f21a"} Jan 03 04:37:11 crc kubenswrapper[4865]: I0103 04:37:11.309710 4865 scope.go:117] "RemoveContainer" containerID="443ad2de9df44972affb457676a84e83fbdcce8153e921cc5ed8476d8a4f6591" Jan 03 04:37:12 crc kubenswrapper[4865]: I0103 04:37:12.700651 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.252196 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-w5zt8"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.253580 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.255848 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.260432 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.266251 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-w5zt8"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.346178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvsrs\" (UniqueName: \"kubernetes.io/projected/32fc6ea4-75a9-461a-8828-226e95f04c2e-kube-api-access-bvsrs\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.346555 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.346597 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-scripts\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.346635 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-config-data\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.453346 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.456028 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.464670 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.466072 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-config-data\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.466270 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvsrs\" (UniqueName: \"kubernetes.io/projected/32fc6ea4-75a9-461a-8828-226e95f04c2e-kube-api-access-bvsrs\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.466391 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.466475 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-scripts\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.486331 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.492560 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-scripts\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.515201 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.518114 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvsrs\" (UniqueName: \"kubernetes.io/projected/32fc6ea4-75a9-461a-8828-226e95f04c2e-kube-api-access-bvsrs\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.528247 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-config-data\") pod \"nova-cell0-cell-mapping-w5zt8\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: 
I0103 04:37:13.538923 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.540518 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.545819 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.570336 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.573603 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.573674 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77hj\" (UniqueName: \"kubernetes.io/projected/59e88785-c8fd-48d3-8f2b-1280ce82c255-kube-api-access-h77hj\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.576416 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e88785-c8fd-48d3-8f2b-1280ce82c255-logs\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.576725 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gbj\" (UniqueName: \"kubernetes.io/projected/f92b473a-1335-438f-bafa-61998fbda5fb-kube-api-access-w2gbj\") pod 
\"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.576833 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.576879 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.576902 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-config-data\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.584842 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.673413 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.674932 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.681856 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e88785-c8fd-48d3-8f2b-1280ce82c255-logs\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.681935 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2gbj\" (UniqueName: \"kubernetes.io/projected/f92b473a-1335-438f-bafa-61998fbda5fb-kube-api-access-w2gbj\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.685640 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.685674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.685695 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-config-data\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.685746 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.685824 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h77hj\" (UniqueName: \"kubernetes.io/projected/59e88785-c8fd-48d3-8f2b-1280ce82c255-kube-api-access-h77hj\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.688336 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.682327 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e88785-c8fd-48d3-8f2b-1280ce82c255-logs\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.701806 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.702262 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-config-data\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.706669 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h77hj\" (UniqueName: 
\"kubernetes.io/projected/59e88785-c8fd-48d3-8f2b-1280ce82c255-kube-api-access-h77hj\") pod \"nova-api-0\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.712896 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2gbj\" (UniqueName: \"kubernetes.io/projected/f92b473a-1335-438f-bafa-61998fbda5fb-kube-api-access-w2gbj\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.716840 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.720483 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.722152 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.735308 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.736994 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.739747 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.777313 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.787137 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f73766ea-418e-4542-89e7-5ad257cb42c1-logs\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.787190 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhsn6\" (UniqueName: \"kubernetes.io/projected/43f7af3a-c204-41f8-a121-b9ab298fffa8-kube-api-access-rhsn6\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.787280 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.787336 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-config-data\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 
04:37:13.787513 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c55v\" (UniqueName: \"kubernetes.io/projected/f73766ea-418e-4542-89e7-5ad257cb42c1-kube-api-access-6c55v\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.787825 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.787925 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.814978 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-chpxx"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.820312 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.854325 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-chpxx"] Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.877780 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889113 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c55v\" (UniqueName: \"kubernetes.io/projected/f73766ea-418e-4542-89e7-5ad257cb42c1-kube-api-access-6c55v\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889162 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889194 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889251 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f73766ea-418e-4542-89e7-5ad257cb42c1-logs\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889271 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889291 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhsn6\" (UniqueName: \"kubernetes.io/projected/43f7af3a-c204-41f8-a121-b9ab298fffa8-kube-api-access-rhsn6\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889312 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889348 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889420 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scm48\" (UniqueName: \"kubernetes.io/projected/39ff2f3d-2289-4caf-bc71-622ec16c0038-kube-api-access-scm48\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc 
kubenswrapper[4865]: I0103 04:37:13.889456 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-config-data\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-config\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.889965 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f73766ea-418e-4542-89e7-5ad257cb42c1-logs\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.893212 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.896765 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.897148 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.904442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-config-data\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.906443 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.909587 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhsn6\" (UniqueName: \"kubernetes.io/projected/43f7af3a-c204-41f8-a121-b9ab298fffa8-kube-api-access-rhsn6\") pod \"nova-cell1-novncproxy-0\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.909770 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c55v\" (UniqueName: \"kubernetes.io/projected/f73766ea-418e-4542-89e7-5ad257cb42c1-kube-api-access-6c55v\") pod \"nova-metadata-0\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " pod="openstack/nova-metadata-0" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.991063 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-config\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.991803 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.991959 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.992064 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.992211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.992410 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scm48\" (UniqueName: 
\"kubernetes.io/projected/39ff2f3d-2289-4caf-bc71-622ec16c0038-kube-api-access-scm48\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.992955 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-config\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.993039 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:13 crc kubenswrapper[4865]: I0103 04:37:13.994406 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.002295 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.004849 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-swift-storage-0\") pod 
\"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.015410 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scm48\" (UniqueName: \"kubernetes.io/projected/39ff2f3d-2289-4caf-bc71-622ec16c0038-kube-api-access-scm48\") pod \"dnsmasq-dns-845d6d6f59-chpxx\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.101536 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.119095 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.155350 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.364540 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-w5zt8"] Jan 03 04:37:14 crc kubenswrapper[4865]: W0103 04:37:14.375666 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32fc6ea4_75a9_461a_8828_226e95f04c2e.slice/crio-7a6795122add4d9607a0f0447f2f712dfd42fff1f4f3e1ef24647abbd7d2305d WatchSource:0}: Error finding container 7a6795122add4d9607a0f0447f2f712dfd42fff1f4f3e1ef24647abbd7d2305d: Status 404 returned error can't find the container with id 7a6795122add4d9607a0f0447f2f712dfd42fff1f4f3e1ef24647abbd7d2305d Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.453201 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:14 crc kubenswrapper[4865]: W0103 04:37:14.462158 4865 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92b473a_1335_438f_bafa_61998fbda5fb.slice/crio-069e1e4b08c0513e7270c1a2bdc3c7ed2366bed1d26ae9adc93de243a7868744 WatchSource:0}: Error finding container 069e1e4b08c0513e7270c1a2bdc3c7ed2366bed1d26ae9adc93de243a7868744: Status 404 returned error can't find the container with id 069e1e4b08c0513e7270c1a2bdc3c7ed2366bed1d26ae9adc93de243a7868744 Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.468408 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.678269 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xjsrz"] Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.684480 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.687281 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.687927 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.707434 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.729239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99n98\" (UniqueName: \"kubernetes.io/projected/1a944f13-33eb-4c3c-906f-a091e2bc9655-kube-api-access-99n98\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.729366 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.729427 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-scripts\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.729551 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-config-data\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.739433 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xjsrz"] Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.831653 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99n98\" (UniqueName: \"kubernetes.io/projected/1a944f13-33eb-4c3c-906f-a091e2bc9655-kube-api-access-99n98\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.831712 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.831744 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-scripts\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.831792 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-config-data\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.837168 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-config-data\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.837511 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-scripts\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.837920 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.842921 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.849402 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99n98\" (UniqueName: \"kubernetes.io/projected/1a944f13-33eb-4c3c-906f-a091e2bc9655-kube-api-access-99n98\") pod \"nova-cell1-conductor-db-sync-xjsrz\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.919714 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:14 crc kubenswrapper[4865]: I0103 04:37:14.939139 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-chpxx"] Jan 03 04:37:14 crc kubenswrapper[4865]: W0103 04:37:14.943234 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39ff2f3d_2289_4caf_bc71_622ec16c0038.slice/crio-0656a8411bcf78962cbd77c0e6096ce1af8d4697351536e31d158d92b6c46732 WatchSource:0}: Error finding container 0656a8411bcf78962cbd77c0e6096ce1af8d4697351536e31d158d92b6c46732: Status 404 returned error can't find the container with id 0656a8411bcf78962cbd77c0e6096ce1af8d4697351536e31d158d92b6c46732 Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.094756 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.381630 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43f7af3a-c204-41f8-a121-b9ab298fffa8","Type":"ContainerStarted","Data":"03fa87a0028cf169cb3afe816fdf7ea1dc1fd8541a281baaabc2ed02079f01b4"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.386517 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f92b473a-1335-438f-bafa-61998fbda5fb","Type":"ContainerStarted","Data":"069e1e4b08c0513e7270c1a2bdc3c7ed2366bed1d26ae9adc93de243a7868744"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.393671 4865 generic.go:334] "Generic (PLEG): container finished" podID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerID="bcf45f0f3aae3336a2cb4aa084ded5083b0bc4275e3302a5bb20ba338fa5b385" exitCode=0 Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.393739 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" event={"ID":"39ff2f3d-2289-4caf-bc71-622ec16c0038","Type":"ContainerDied","Data":"bcf45f0f3aae3336a2cb4aa084ded5083b0bc4275e3302a5bb20ba338fa5b385"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.393768 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" event={"ID":"39ff2f3d-2289-4caf-bc71-622ec16c0038","Type":"ContainerStarted","Data":"0656a8411bcf78962cbd77c0e6096ce1af8d4697351536e31d158d92b6c46732"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.395962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e88785-c8fd-48d3-8f2b-1280ce82c255","Type":"ContainerStarted","Data":"4ffca361d2c5565319452961405162301a5d9d5c5bbed2ba97de78c165d2d993"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.399532 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"f73766ea-418e-4542-89e7-5ad257cb42c1","Type":"ContainerStarted","Data":"d3af29601df9897e8ad6e98df088411f571cb43f3be63d62d50e7f1a5d03e49c"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.401035 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w5zt8" event={"ID":"32fc6ea4-75a9-461a-8828-226e95f04c2e","Type":"ContainerStarted","Data":"9f1bcf24558cd967053992df49d42fed51acb41bc9717547cb9662f1d8f29d1c"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.401055 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w5zt8" event={"ID":"32fc6ea4-75a9-461a-8828-226e95f04c2e","Type":"ContainerStarted","Data":"7a6795122add4d9607a0f0447f2f712dfd42fff1f4f3e1ef24647abbd7d2305d"} Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.467517 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-w5zt8" podStartSLOduration=2.467495589 podStartE2EDuration="2.467495589s" podCreationTimestamp="2026-01-03 04:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:15.444753774 +0000 UTC m=+1262.561806979" watchObservedRunningTime="2026-01-03 04:37:15.467495589 +0000 UTC m=+1262.584548774" Jan 03 04:37:15 crc kubenswrapper[4865]: I0103 04:37:15.685951 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xjsrz"] Jan 03 04:37:16 crc kubenswrapper[4865]: I0103 04:37:16.411287 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" event={"ID":"1a944f13-33eb-4c3c-906f-a091e2bc9655","Type":"ContainerStarted","Data":"e648e330c026c6a1a165e4bbdaefa8b1ead787450b33718056c54a0b7ff6724c"} Jan 03 04:37:16 crc kubenswrapper[4865]: I0103 04:37:16.411670 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-xjsrz" event={"ID":"1a944f13-33eb-4c3c-906f-a091e2bc9655","Type":"ContainerStarted","Data":"70a30d1a1bd865f1ff215a4180d63fabe4f67e4ee909d25499d90d17e21e6711"} Jan 03 04:37:16 crc kubenswrapper[4865]: I0103 04:37:16.412751 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" event={"ID":"39ff2f3d-2289-4caf-bc71-622ec16c0038","Type":"ContainerStarted","Data":"0950196f014869fb67e25efedf5e727c44313dee654a2e85aec6ec41d5cd85a0"} Jan 03 04:37:16 crc kubenswrapper[4865]: I0103 04:37:16.431958 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" podStartSLOduration=2.431941897 podStartE2EDuration="2.431941897s" podCreationTimestamp="2026-01-03 04:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:16.428812873 +0000 UTC m=+1263.545866058" watchObservedRunningTime="2026-01-03 04:37:16.431941897 +0000 UTC m=+1263.548995082" Jan 03 04:37:16 crc kubenswrapper[4865]: I0103 04:37:16.454314 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" podStartSLOduration=3.454295762 podStartE2EDuration="3.454295762s" podCreationTimestamp="2026-01-03 04:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:16.453695975 +0000 UTC m=+1263.570749160" watchObservedRunningTime="2026-01-03 04:37:16.454295762 +0000 UTC m=+1263.571348947" Jan 03 04:37:16 crc kubenswrapper[4865]: I0103 04:37:16.791179 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:16 crc kubenswrapper[4865]: I0103 04:37:16.803009 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:17 crc 
kubenswrapper[4865]: I0103 04:37:17.420304 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.441263 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43f7af3a-c204-41f8-a121-b9ab298fffa8","Type":"ContainerStarted","Data":"40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6"} Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.441410 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="43f7af3a-c204-41f8-a121-b9ab298fffa8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6" gracePeriod=30 Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.443501 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f92b473a-1335-438f-bafa-61998fbda5fb","Type":"ContainerStarted","Data":"16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78"} Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.447417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f73766ea-418e-4542-89e7-5ad257cb42c1","Type":"ContainerStarted","Data":"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966"} Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.447449 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f73766ea-418e-4542-89e7-5ad257cb42c1","Type":"ContainerStarted","Data":"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283"} Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.447577 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerName="nova-metadata-log" 
containerID="cri-o://67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283" gracePeriod=30 Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.447652 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerName="nova-metadata-metadata" containerID="cri-o://646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966" gracePeriod=30 Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.452895 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e88785-c8fd-48d3-8f2b-1280ce82c255","Type":"ContainerStarted","Data":"c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb"} Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.453226 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e88785-c8fd-48d3-8f2b-1280ce82c255","Type":"ContainerStarted","Data":"4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84"} Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.473876 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.992551911 podStartE2EDuration="6.473852808s" podCreationTimestamp="2026-01-03 04:37:13 +0000 UTC" firstStartedPulling="2026-01-03 04:37:14.928375177 +0000 UTC m=+1262.045428362" lastFinishedPulling="2026-01-03 04:37:18.409676074 +0000 UTC m=+1265.526729259" observedRunningTime="2026-01-03 04:37:19.461282439 +0000 UTC m=+1266.578335634" watchObservedRunningTime="2026-01-03 04:37:19.473852808 +0000 UTC m=+1266.590906013" Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.482621 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.540358899 podStartE2EDuration="6.482596765s" podCreationTimestamp="2026-01-03 04:37:13 +0000 UTC" firstStartedPulling="2026-01-03 
04:37:14.468180148 +0000 UTC m=+1261.585233333" lastFinishedPulling="2026-01-03 04:37:18.410418014 +0000 UTC m=+1265.527471199" observedRunningTime="2026-01-03 04:37:19.475315378 +0000 UTC m=+1266.592368563" watchObservedRunningTime="2026-01-03 04:37:19.482596765 +0000 UTC m=+1266.599649990" Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.495533 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.925204911 podStartE2EDuration="6.495512474s" podCreationTimestamp="2026-01-03 04:37:13 +0000 UTC" firstStartedPulling="2026-01-03 04:37:14.847172252 +0000 UTC m=+1261.964225437" lastFinishedPulling="2026-01-03 04:37:18.417479815 +0000 UTC m=+1265.534533000" observedRunningTime="2026-01-03 04:37:19.494512496 +0000 UTC m=+1266.611565681" watchObservedRunningTime="2026-01-03 04:37:19.495512474 +0000 UTC m=+1266.612565669" Jan 03 04:37:19 crc kubenswrapper[4865]: I0103 04:37:19.516800 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.827079708 podStartE2EDuration="6.516777908s" podCreationTimestamp="2026-01-03 04:37:13 +0000 UTC" firstStartedPulling="2026-01-03 04:37:14.719175623 +0000 UTC m=+1261.836228808" lastFinishedPulling="2026-01-03 04:37:18.408873823 +0000 UTC m=+1265.525927008" observedRunningTime="2026-01-03 04:37:19.508146865 +0000 UTC m=+1266.625200050" watchObservedRunningTime="2026-01-03 04:37:19.516777908 +0000 UTC m=+1266.633831093" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.009427 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.170862 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-config-data\") pod \"f73766ea-418e-4542-89e7-5ad257cb42c1\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.171233 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c55v\" (UniqueName: \"kubernetes.io/projected/f73766ea-418e-4542-89e7-5ad257cb42c1-kube-api-access-6c55v\") pod \"f73766ea-418e-4542-89e7-5ad257cb42c1\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.171295 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-combined-ca-bundle\") pod \"f73766ea-418e-4542-89e7-5ad257cb42c1\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.171487 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f73766ea-418e-4542-89e7-5ad257cb42c1-logs\") pod \"f73766ea-418e-4542-89e7-5ad257cb42c1\" (UID: \"f73766ea-418e-4542-89e7-5ad257cb42c1\") " Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.172203 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73766ea-418e-4542-89e7-5ad257cb42c1-logs" (OuterVolumeSpecName: "logs") pod "f73766ea-418e-4542-89e7-5ad257cb42c1" (UID: "f73766ea-418e-4542-89e7-5ad257cb42c1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.183619 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73766ea-418e-4542-89e7-5ad257cb42c1-kube-api-access-6c55v" (OuterVolumeSpecName: "kube-api-access-6c55v") pod "f73766ea-418e-4542-89e7-5ad257cb42c1" (UID: "f73766ea-418e-4542-89e7-5ad257cb42c1"). InnerVolumeSpecName "kube-api-access-6c55v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.208446 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-config-data" (OuterVolumeSpecName: "config-data") pod "f73766ea-418e-4542-89e7-5ad257cb42c1" (UID: "f73766ea-418e-4542-89e7-5ad257cb42c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.211488 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f73766ea-418e-4542-89e7-5ad257cb42c1" (UID: "f73766ea-418e-4542-89e7-5ad257cb42c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.274321 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c55v\" (UniqueName: \"kubernetes.io/projected/f73766ea-418e-4542-89e7-5ad257cb42c1-kube-api-access-6c55v\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.274411 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.274433 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f73766ea-418e-4542-89e7-5ad257cb42c1-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.274453 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73766ea-418e-4542-89e7-5ad257cb42c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.463867 4865 generic.go:334] "Generic (PLEG): container finished" podID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerID="646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966" exitCode=0 Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.464190 4865 generic.go:334] "Generic (PLEG): container finished" podID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerID="67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283" exitCode=143 Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.463955 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.463967 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f73766ea-418e-4542-89e7-5ad257cb42c1","Type":"ContainerDied","Data":"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966"} Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.464287 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f73766ea-418e-4542-89e7-5ad257cb42c1","Type":"ContainerDied","Data":"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283"} Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.464311 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f73766ea-418e-4542-89e7-5ad257cb42c1","Type":"ContainerDied","Data":"d3af29601df9897e8ad6e98df088411f571cb43f3be63d62d50e7f1a5d03e49c"} Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.464314 4865 scope.go:117] "RemoveContainer" containerID="646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.501442 4865 scope.go:117] "RemoveContainer" containerID="67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.547884 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.562039 4865 scope.go:117] "RemoveContainer" containerID="646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966" Jan 03 04:37:20 crc kubenswrapper[4865]: E0103 04:37:20.562566 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966\": container with ID starting with 646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966 
not found: ID does not exist" containerID="646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.562611 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966"} err="failed to get container status \"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966\": rpc error: code = NotFound desc = could not find container \"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966\": container with ID starting with 646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966 not found: ID does not exist" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.562642 4865 scope.go:117] "RemoveContainer" containerID="67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283" Jan 03 04:37:20 crc kubenswrapper[4865]: E0103 04:37:20.563002 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283\": container with ID starting with 67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283 not found: ID does not exist" containerID="67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.563034 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283"} err="failed to get container status \"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283\": rpc error: code = NotFound desc = could not find container \"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283\": container with ID starting with 67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283 not found: ID does not exist" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 
04:37:20.563051 4865 scope.go:117] "RemoveContainer" containerID="646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.563231 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966"} err="failed to get container status \"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966\": rpc error: code = NotFound desc = could not find container \"646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966\": container with ID starting with 646a7f39809da8bdc33384ac82f7f3a90a98cae9ea599ad5afc7a4bd09be1966 not found: ID does not exist" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.563255 4865 scope.go:117] "RemoveContainer" containerID="67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.563436 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283"} err="failed to get container status \"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283\": rpc error: code = NotFound desc = could not find container \"67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283\": container with ID starting with 67ac25f38db28ecc0ff4a8f88a1b3cc9a1bf6437501d3d510ea2671a64542283 not found: ID does not exist" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.572591 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.577554 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:20 crc kubenswrapper[4865]: E0103 04:37:20.578012 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" 
containerName="nova-metadata-metadata" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.578033 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerName="nova-metadata-metadata" Jan 03 04:37:20 crc kubenswrapper[4865]: E0103 04:37:20.578060 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerName="nova-metadata-log" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.578068 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerName="nova-metadata-log" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.578300 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerName="nova-metadata-metadata" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.578333 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" containerName="nova-metadata-log" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.579495 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.582254 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.582535 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.600149 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.684143 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kplk8\" (UniqueName: \"kubernetes.io/projected/3652533b-7f39-4766-b17a-cb1b22c6bc79-kube-api-access-kplk8\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.684185 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.684245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.684305 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3652533b-7f39-4766-b17a-cb1b22c6bc79-logs\") pod 
\"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.684455 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-config-data\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.786249 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kplk8\" (UniqueName: \"kubernetes.io/projected/3652533b-7f39-4766-b17a-cb1b22c6bc79-kube-api-access-kplk8\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.786293 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.786361 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.786426 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3652533b-7f39-4766-b17a-cb1b22c6bc79-logs\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 
04:37:20.786488 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-config-data\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.787051 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3652533b-7f39-4766-b17a-cb1b22c6bc79-logs\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.790866 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-config-data\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.791175 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.806536 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.822572 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kplk8\" (UniqueName: \"kubernetes.io/projected/3652533b-7f39-4766-b17a-cb1b22c6bc79-kube-api-access-kplk8\") pod 
\"nova-metadata-0\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " pod="openstack/nova-metadata-0" Jan 03 04:37:20 crc kubenswrapper[4865]: I0103 04:37:20.899218 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:21 crc kubenswrapper[4865]: I0103 04:37:21.192265 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73766ea-418e-4542-89e7-5ad257cb42c1" path="/var/lib/kubelet/pods/f73766ea-418e-4542-89e7-5ad257cb42c1/volumes" Jan 03 04:37:21 crc kubenswrapper[4865]: I0103 04:37:21.423807 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:21 crc kubenswrapper[4865]: I0103 04:37:21.477739 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3652533b-7f39-4766-b17a-cb1b22c6bc79","Type":"ContainerStarted","Data":"03e0588d306b5002081fee4e7aeeecd236406cfa1a886134ef6e692506edcbe3"} Jan 03 04:37:22 crc kubenswrapper[4865]: I0103 04:37:22.496431 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3652533b-7f39-4766-b17a-cb1b22c6bc79","Type":"ContainerStarted","Data":"c450ffe11c08e22e2a4214d6c4ea282a08969c52cc0e08075cddb37be6d5ea04"} Jan 03 04:37:22 crc kubenswrapper[4865]: I0103 04:37:22.496806 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3652533b-7f39-4766-b17a-cb1b22c6bc79","Type":"ContainerStarted","Data":"a4254f00ceec05e941686b9f1581fe6de1fcce94b31d3756ae196f00acf741fd"} Jan 03 04:37:22 crc kubenswrapper[4865]: I0103 04:37:22.502058 4865 generic.go:334] "Generic (PLEG): container finished" podID="32fc6ea4-75a9-461a-8828-226e95f04c2e" containerID="9f1bcf24558cd967053992df49d42fed51acb41bc9717547cb9662f1d8f29d1c" exitCode=0 Jan 03 04:37:22 crc kubenswrapper[4865]: I0103 04:37:22.502115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w5zt8" 
event={"ID":"32fc6ea4-75a9-461a-8828-226e95f04c2e","Type":"ContainerDied","Data":"9f1bcf24558cd967053992df49d42fed51acb41bc9717547cb9662f1d8f29d1c"} Jan 03 04:37:22 crc kubenswrapper[4865]: I0103 04:37:22.528739 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.528715859 podStartE2EDuration="2.528715859s" podCreationTimestamp="2026-01-03 04:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:22.526345026 +0000 UTC m=+1269.643398231" watchObservedRunningTime="2026-01-03 04:37:22.528715859 +0000 UTC m=+1269.645769054" Jan 03 04:37:23 crc kubenswrapper[4865]: I0103 04:37:23.878407 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 03 04:37:23 crc kubenswrapper[4865]: I0103 04:37:23.878708 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 03 04:37:23 crc kubenswrapper[4865]: I0103 04:37:23.897636 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 04:37:23 crc kubenswrapper[4865]: I0103 04:37:23.897682 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 04:37:23 crc kubenswrapper[4865]: I0103 04:37:23.916515 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.012788 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.097169 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-config-data\") pod \"32fc6ea4-75a9-461a-8828-226e95f04c2e\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.097208 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-scripts\") pod \"32fc6ea4-75a9-461a-8828-226e95f04c2e\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.097326 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvsrs\" (UniqueName: \"kubernetes.io/projected/32fc6ea4-75a9-461a-8828-226e95f04c2e-kube-api-access-bvsrs\") pod \"32fc6ea4-75a9-461a-8828-226e95f04c2e\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.097408 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-combined-ca-bundle\") pod \"32fc6ea4-75a9-461a-8828-226e95f04c2e\" (UID: \"32fc6ea4-75a9-461a-8828-226e95f04c2e\") " Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.103153 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fc6ea4-75a9-461a-8828-226e95f04c2e-kube-api-access-bvsrs" (OuterVolumeSpecName: "kube-api-access-bvsrs") pod "32fc6ea4-75a9-461a-8828-226e95f04c2e" (UID: "32fc6ea4-75a9-461a-8828-226e95f04c2e"). InnerVolumeSpecName "kube-api-access-bvsrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.103352 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-scripts" (OuterVolumeSpecName: "scripts") pod "32fc6ea4-75a9-461a-8828-226e95f04c2e" (UID: "32fc6ea4-75a9-461a-8828-226e95f04c2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.120304 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.155135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32fc6ea4-75a9-461a-8828-226e95f04c2e" (UID: "32fc6ea4-75a9-461a-8828-226e95f04c2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.156112 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-config-data" (OuterVolumeSpecName: "config-data") pod "32fc6ea4-75a9-461a-8828-226e95f04c2e" (UID: "32fc6ea4-75a9-461a-8828-226e95f04c2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.156530 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.199193 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.199228 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.199238 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc6ea4-75a9-461a-8828-226e95f04c2e-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.199247 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvsrs\" (UniqueName: \"kubernetes.io/projected/32fc6ea4-75a9-461a-8828-226e95f04c2e-kube-api-access-bvsrs\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.225892 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-g5rpq"] Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.226255 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" podUID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerName="dnsmasq-dns" containerID="cri-o://8f740d6baa5c00348af3a7a28850b7a37ae48226ec0a675bef54db9b35bcf811" gracePeriod=10 Jan 03 04:37:24 crc kubenswrapper[4865]: E0103 04:37:24.285937 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f18b027_fcad_4bf3_87e1_958c0672c8e5.slice/crio-8f740d6baa5c00348af3a7a28850b7a37ae48226ec0a675bef54db9b35bcf811.scope\": RecentStats: unable to find data in memory cache]" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.522593 4865 generic.go:334] "Generic (PLEG): container finished" podID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerID="8f740d6baa5c00348af3a7a28850b7a37ae48226ec0a675bef54db9b35bcf811" exitCode=0 Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.522672 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" event={"ID":"5f18b027-fcad-4bf3-87e1-958c0672c8e5","Type":"ContainerDied","Data":"8f740d6baa5c00348af3a7a28850b7a37ae48226ec0a675bef54db9b35bcf811"} Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.524792 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w5zt8" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.525766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w5zt8" event={"ID":"32fc6ea4-75a9-461a-8828-226e95f04c2e","Type":"ContainerDied","Data":"7a6795122add4d9607a0f0447f2f712dfd42fff1f4f3e1ef24647abbd7d2305d"} Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.525802 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a6795122add4d9607a0f0447f2f712dfd42fff1f4f3e1ef24647abbd7d2305d" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.563612 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.735188 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.735794 4865 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-log" containerID="cri-o://4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84" gracePeriod=30 Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.735890 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-api" containerID="cri-o://c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb" gracePeriod=30 Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.743138 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": EOF" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.743226 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": EOF" Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.793646 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.793858 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-log" containerID="cri-o://a4254f00ceec05e941686b9f1581fe6de1fcce94b31d3756ae196f00acf741fd" gracePeriod=30 Jan 03 04:37:24 crc kubenswrapper[4865]: I0103 04:37:24.793921 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-metadata" containerID="cri-o://c450ffe11c08e22e2a4214d6c4ea282a08969c52cc0e08075cddb37be6d5ea04" gracePeriod=30 Jan 03 
04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.091367 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.538208 4865 generic.go:334] "Generic (PLEG): container finished" podID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerID="c450ffe11c08e22e2a4214d6c4ea282a08969c52cc0e08075cddb37be6d5ea04" exitCode=0 Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.538601 4865 generic.go:334] "Generic (PLEG): container finished" podID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerID="a4254f00ceec05e941686b9f1581fe6de1fcce94b31d3756ae196f00acf741fd" exitCode=143 Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.538292 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3652533b-7f39-4766-b17a-cb1b22c6bc79","Type":"ContainerDied","Data":"c450ffe11c08e22e2a4214d6c4ea282a08969c52cc0e08075cddb37be6d5ea04"} Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.538648 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3652533b-7f39-4766-b17a-cb1b22c6bc79","Type":"ContainerDied","Data":"a4254f00ceec05e941686b9f1581fe6de1fcce94b31d3756ae196f00acf741fd"} Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.541043 4865 generic.go:334] "Generic (PLEG): container finished" podID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerID="4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84" exitCode=143 Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.541097 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e88785-c8fd-48d3-8f2b-1280ce82c255","Type":"ContainerDied","Data":"4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84"} Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.635755 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.727365 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-swift-storage-0\") pod \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.727433 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-sb\") pod \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.727526 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhqrd\" (UniqueName: \"kubernetes.io/projected/5f18b027-fcad-4bf3-87e1-958c0672c8e5-kube-api-access-nhqrd\") pod \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.727769 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-nb\") pod \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.727831 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-svc\") pod \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.727969 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-config\") pod \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\" (UID: \"5f18b027-fcad-4bf3-87e1-958c0672c8e5\") " Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.743743 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f18b027-fcad-4bf3-87e1-958c0672c8e5-kube-api-access-nhqrd" (OuterVolumeSpecName: "kube-api-access-nhqrd") pod "5f18b027-fcad-4bf3-87e1-958c0672c8e5" (UID: "5f18b027-fcad-4bf3-87e1-958c0672c8e5"). InnerVolumeSpecName "kube-api-access-nhqrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.795224 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-config" (OuterVolumeSpecName: "config") pod "5f18b027-fcad-4bf3-87e1-958c0672c8e5" (UID: "5f18b027-fcad-4bf3-87e1-958c0672c8e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.813217 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f18b027-fcad-4bf3-87e1-958c0672c8e5" (UID: "5f18b027-fcad-4bf3-87e1-958c0672c8e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.818568 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f18b027-fcad-4bf3-87e1-958c0672c8e5" (UID: "5f18b027-fcad-4bf3-87e1-958c0672c8e5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.822079 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f18b027-fcad-4bf3-87e1-958c0672c8e5" (UID: "5f18b027-fcad-4bf3-87e1-958c0672c8e5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.830509 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.830536 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.830546 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhqrd\" (UniqueName: \"kubernetes.io/projected/5f18b027-fcad-4bf3-87e1-958c0672c8e5-kube-api-access-nhqrd\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.830554 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.830563 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.834452 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f18b027-fcad-4bf3-87e1-958c0672c8e5" (UID: "5f18b027-fcad-4bf3-87e1-958c0672c8e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.847534 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.912730 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.912791 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 04:37:25 crc kubenswrapper[4865]: I0103 04:37:25.932171 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f18b027-fcad-4bf3-87e1-958c0672c8e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.155024 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.237080 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kplk8\" (UniqueName: \"kubernetes.io/projected/3652533b-7f39-4766-b17a-cb1b22c6bc79-kube-api-access-kplk8\") pod \"3652533b-7f39-4766-b17a-cb1b22c6bc79\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.237436 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-config-data\") pod \"3652533b-7f39-4766-b17a-cb1b22c6bc79\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.237512 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-combined-ca-bundle\") pod \"3652533b-7f39-4766-b17a-cb1b22c6bc79\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.237573 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3652533b-7f39-4766-b17a-cb1b22c6bc79-logs\") pod \"3652533b-7f39-4766-b17a-cb1b22c6bc79\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.237650 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-nova-metadata-tls-certs\") pod \"3652533b-7f39-4766-b17a-cb1b22c6bc79\" (UID: \"3652533b-7f39-4766-b17a-cb1b22c6bc79\") " Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.237869 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3652533b-7f39-4766-b17a-cb1b22c6bc79-logs" (OuterVolumeSpecName: "logs") pod "3652533b-7f39-4766-b17a-cb1b22c6bc79" (UID: "3652533b-7f39-4766-b17a-cb1b22c6bc79"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.238805 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3652533b-7f39-4766-b17a-cb1b22c6bc79-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.252535 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3652533b-7f39-4766-b17a-cb1b22c6bc79-kube-api-access-kplk8" (OuterVolumeSpecName: "kube-api-access-kplk8") pod "3652533b-7f39-4766-b17a-cb1b22c6bc79" (UID: "3652533b-7f39-4766-b17a-cb1b22c6bc79"). InnerVolumeSpecName "kube-api-access-kplk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.269150 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-config-data" (OuterVolumeSpecName: "config-data") pod "3652533b-7f39-4766-b17a-cb1b22c6bc79" (UID: "3652533b-7f39-4766-b17a-cb1b22c6bc79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.283388 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3652533b-7f39-4766-b17a-cb1b22c6bc79" (UID: "3652533b-7f39-4766-b17a-cb1b22c6bc79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.288436 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3652533b-7f39-4766-b17a-cb1b22c6bc79" (UID: "3652533b-7f39-4766-b17a-cb1b22c6bc79"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.340720 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kplk8\" (UniqueName: \"kubernetes.io/projected/3652533b-7f39-4766-b17a-cb1b22c6bc79-kube-api-access-kplk8\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.340752 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.340761 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.340769 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3652533b-7f39-4766-b17a-cb1b22c6bc79-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.551360 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.551652 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-g5rpq" event={"ID":"5f18b027-fcad-4bf3-87e1-958c0672c8e5","Type":"ContainerDied","Data":"292a2697f5211ff0f26bfb44c76a948b7d0e86bc60b294b2bcf47e0ea986f1c0"} Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.551741 4865 scope.go:117] "RemoveContainer" containerID="8f740d6baa5c00348af3a7a28850b7a37ae48226ec0a675bef54db9b35bcf811" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.553529 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3652533b-7f39-4766-b17a-cb1b22c6bc79","Type":"ContainerDied","Data":"03e0588d306b5002081fee4e7aeeecd236406cfa1a886134ef6e692506edcbe3"} Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.553596 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f92b473a-1335-438f-bafa-61998fbda5fb" containerName="nova-scheduler-scheduler" containerID="cri-o://16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78" gracePeriod=30 Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.553650 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.575092 4865 scope.go:117] "RemoveContainer" containerID="37cf9cce850081fb582dd3e5eed295d0b1eeecda7e2fe4b52399c1ddf8514a14" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.609973 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.617247 4865 scope.go:117] "RemoveContainer" containerID="c450ffe11c08e22e2a4214d6c4ea282a08969c52cc0e08075cddb37be6d5ea04" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.628822 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.656962 4865 scope.go:117] "RemoveContainer" containerID="a4254f00ceec05e941686b9f1581fe6de1fcce94b31d3756ae196f00acf741fd" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.694736 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-g5rpq"] Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.696890 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:26 crc kubenswrapper[4865]: E0103 04:37:26.697532 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerName="dnsmasq-dns" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697545 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerName="dnsmasq-dns" Jan 03 04:37:26 crc kubenswrapper[4865]: E0103 04:37:26.697569 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-log" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697575 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-log" 
Jan 03 04:37:26 crc kubenswrapper[4865]: E0103 04:37:26.697591 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fc6ea4-75a9-461a-8828-226e95f04c2e" containerName="nova-manage" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697596 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fc6ea4-75a9-461a-8828-226e95f04c2e" containerName="nova-manage" Jan 03 04:37:26 crc kubenswrapper[4865]: E0103 04:37:26.697631 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-metadata" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697637 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-metadata" Jan 03 04:37:26 crc kubenswrapper[4865]: E0103 04:37:26.697652 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerName="init" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697657 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerName="init" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697937 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fc6ea4-75a9-461a-8828-226e95f04c2e" containerName="nova-manage" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697961 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-log" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.697978 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" containerName="dnsmasq-dns" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.698003 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" containerName="nova-metadata-metadata" Jan 03 
04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.717988 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.718771 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-g5rpq"] Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.720347 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.720473 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.733519 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.760459 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.760513 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptm2d\" (UniqueName: \"kubernetes.io/projected/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-kube-api-access-ptm2d\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.760538 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-logs\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc 
kubenswrapper[4865]: I0103 04:37:26.760560 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.760586 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-config-data\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.861909 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.861972 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptm2d\" (UniqueName: \"kubernetes.io/projected/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-kube-api-access-ptm2d\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.861995 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-logs\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.862026 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.862050 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-config-data\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.863187 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-logs\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.866077 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.867773 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-config-data\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.868634 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" 
Jan 03 04:37:26 crc kubenswrapper[4865]: I0103 04:37:26.878990 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptm2d\" (UniqueName: \"kubernetes.io/projected/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-kube-api-access-ptm2d\") pod \"nova-metadata-0\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " pod="openstack/nova-metadata-0" Jan 03 04:37:27 crc kubenswrapper[4865]: I0103 04:37:27.032417 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:37:27 crc kubenswrapper[4865]: I0103 04:37:27.178317 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3652533b-7f39-4766-b17a-cb1b22c6bc79" path="/var/lib/kubelet/pods/3652533b-7f39-4766-b17a-cb1b22c6bc79/volumes" Jan 03 04:37:27 crc kubenswrapper[4865]: I0103 04:37:27.179037 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f18b027-fcad-4bf3-87e1-958c0672c8e5" path="/var/lib/kubelet/pods/5f18b027-fcad-4bf3-87e1-958c0672c8e5/volumes" Jan 03 04:37:27 crc kubenswrapper[4865]: I0103 04:37:27.535225 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:37:27 crc kubenswrapper[4865]: I0103 04:37:27.563798 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab8d9e87-a1c6-45bf-a9dc-24201830e28a","Type":"ContainerStarted","Data":"6e4a1448b0712448658041d5189905884f60dbc27c3b5339578eb27e4e1b73d9"} Jan 03 04:37:28 crc kubenswrapper[4865]: I0103 04:37:28.579026 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab8d9e87-a1c6-45bf-a9dc-24201830e28a","Type":"ContainerStarted","Data":"dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe"} Jan 03 04:37:28 crc kubenswrapper[4865]: I0103 04:37:28.579260 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ab8d9e87-a1c6-45bf-a9dc-24201830e28a","Type":"ContainerStarted","Data":"062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31"} Jan 03 04:37:28 crc kubenswrapper[4865]: I0103 04:37:28.599726 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.599708014 podStartE2EDuration="2.599708014s" podCreationTimestamp="2026-01-03 04:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:28.594293327 +0000 UTC m=+1275.711346522" watchObservedRunningTime="2026-01-03 04:37:28.599708014 +0000 UTC m=+1275.716761219" Jan 03 04:37:28 crc kubenswrapper[4865]: E0103 04:37:28.881493 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 04:37:28 crc kubenswrapper[4865]: E0103 04:37:28.883059 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 04:37:28 crc kubenswrapper[4865]: E0103 04:37:28.885601 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 04:37:28 crc kubenswrapper[4865]: E0103 04:37:28.885641 4865 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f92b473a-1335-438f-bafa-61998fbda5fb" containerName="nova-scheduler-scheduler" Jan 03 04:37:29 crc kubenswrapper[4865]: I0103 04:37:29.591829 4865 generic.go:334] "Generic (PLEG): container finished" podID="1a944f13-33eb-4c3c-906f-a091e2bc9655" containerID="e648e330c026c6a1a165e4bbdaefa8b1ead787450b33718056c54a0b7ff6724c" exitCode=0 Jan 03 04:37:29 crc kubenswrapper[4865]: I0103 04:37:29.591955 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" event={"ID":"1a944f13-33eb-4c3c-906f-a091e2bc9655","Type":"ContainerDied","Data":"e648e330c026c6a1a165e4bbdaefa8b1ead787450b33718056c54a0b7ff6724c"} Jan 03 04:37:29 crc kubenswrapper[4865]: I0103 04:37:29.691663 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:37:29 crc kubenswrapper[4865]: I0103 04:37:29.691942 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3c06d4fd-0e97-401a-a450-92e7e1c22131" containerName="kube-state-metrics" containerID="cri-o://4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398" gracePeriod=30 Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.463844 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.528354 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z95rc\" (UniqueName: \"kubernetes.io/projected/3c06d4fd-0e97-401a-a450-92e7e1c22131-kube-api-access-z95rc\") pod \"3c06d4fd-0e97-401a-a450-92e7e1c22131\" (UID: \"3c06d4fd-0e97-401a-a450-92e7e1c22131\") " Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.533569 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c06d4fd-0e97-401a-a450-92e7e1c22131-kube-api-access-z95rc" (OuterVolumeSpecName: "kube-api-access-z95rc") pod "3c06d4fd-0e97-401a-a450-92e7e1c22131" (UID: "3c06d4fd-0e97-401a-a450-92e7e1c22131"). InnerVolumeSpecName "kube-api-access-z95rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.604512 4865 generic.go:334] "Generic (PLEG): container finished" podID="3c06d4fd-0e97-401a-a450-92e7e1c22131" containerID="4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398" exitCode=2 Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.604531 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.604548 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c06d4fd-0e97-401a-a450-92e7e1c22131","Type":"ContainerDied","Data":"4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398"} Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.605305 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c06d4fd-0e97-401a-a450-92e7e1c22131","Type":"ContainerDied","Data":"e0b15078b7f157c77ce52769bb9990e0922d5c79cbbf35562c0a796d6a0c8242"} Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.605343 4865 scope.go:117] "RemoveContainer" containerID="4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.640408 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z95rc\" (UniqueName: \"kubernetes.io/projected/3c06d4fd-0e97-401a-a450-92e7e1c22131-kube-api-access-z95rc\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.657727 4865 generic.go:334] "Generic (PLEG): container finished" podID="f92b473a-1335-438f-bafa-61998fbda5fb" containerID="16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78" exitCode=0 Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.657897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f92b473a-1335-438f-bafa-61998fbda5fb","Type":"ContainerDied","Data":"16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78"} Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.680876 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.691305 4865 scope.go:117] "RemoveContainer" 
containerID="4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398" Jan 03 04:37:30 crc kubenswrapper[4865]: E0103 04:37:30.692316 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398\": container with ID starting with 4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398 not found: ID does not exist" containerID="4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.692433 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398"} err="failed to get container status \"4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398\": rpc error: code = NotFound desc = could not find container \"4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398\": container with ID starting with 4724c540a3155faf5906da2136d40648cef01b3a001ebdabd6990ec7c9cf2398 not found: ID does not exist" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.709191 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.717371 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.724677 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:37:30 crc kubenswrapper[4865]: E0103 04:37:30.725155 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92b473a-1335-438f-bafa-61998fbda5fb" containerName="nova-scheduler-scheduler" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.727635 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92b473a-1335-438f-bafa-61998fbda5fb" containerName="nova-scheduler-scheduler" Jan 03 04:37:30 crc kubenswrapper[4865]: E0103 04:37:30.727693 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c06d4fd-0e97-401a-a450-92e7e1c22131" containerName="kube-state-metrics" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.727701 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c06d4fd-0e97-401a-a450-92e7e1c22131" containerName="kube-state-metrics" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.728081 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c06d4fd-0e97-401a-a450-92e7e1c22131" containerName="kube-state-metrics" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.728102 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92b473a-1335-438f-bafa-61998fbda5fb" containerName="nova-scheduler-scheduler" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.728760 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.731702 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.732071 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.751141 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.843154 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-config-data\") pod \"f92b473a-1335-438f-bafa-61998fbda5fb\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.843274 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2gbj\" (UniqueName: \"kubernetes.io/projected/f92b473a-1335-438f-bafa-61998fbda5fb-kube-api-access-w2gbj\") pod \"f92b473a-1335-438f-bafa-61998fbda5fb\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.843452 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-combined-ca-bundle\") pod \"f92b473a-1335-438f-bafa-61998fbda5fb\" (UID: \"f92b473a-1335-438f-bafa-61998fbda5fb\") " Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.843690 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.843739 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.843760 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.843889 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgwm\" (UniqueName: \"kubernetes.io/projected/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-api-access-6xgwm\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.847255 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92b473a-1335-438f-bafa-61998fbda5fb-kube-api-access-w2gbj" (OuterVolumeSpecName: "kube-api-access-w2gbj") pod "f92b473a-1335-438f-bafa-61998fbda5fb" (UID: "f92b473a-1335-438f-bafa-61998fbda5fb"). InnerVolumeSpecName "kube-api-access-w2gbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.874404 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f92b473a-1335-438f-bafa-61998fbda5fb" (UID: "f92b473a-1335-438f-bafa-61998fbda5fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.890057 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-config-data" (OuterVolumeSpecName: "config-data") pod "f92b473a-1335-438f-bafa-61998fbda5fb" (UID: "f92b473a-1335-438f-bafa-61998fbda5fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.945751 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.945817 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.945845 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-state-metrics-tls-certs\") pod 
\"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.945890 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgwm\" (UniqueName: \"kubernetes.io/projected/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-api-access-6xgwm\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.946004 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.946017 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2gbj\" (UniqueName: \"kubernetes.io/projected/f92b473a-1335-438f-bafa-61998fbda5fb-kube-api-access-w2gbj\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.946030 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92b473a-1335-438f-bafa-61998fbda5fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.955069 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.957457 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.957570 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:30 crc kubenswrapper[4865]: I0103 04:37:30.974138 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgwm\" (UniqueName: \"kubernetes.io/projected/5668e757-efa2-4bac-a269-6e2cdd9dbfef-kube-api-access-6xgwm\") pod \"kube-state-metrics-0\" (UID: \"5668e757-efa2-4bac-a269-6e2cdd9dbfef\") " pod="openstack/kube-state-metrics-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.010395 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.047037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-combined-ca-bundle\") pod \"1a944f13-33eb-4c3c-906f-a091e2bc9655\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.047143 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99n98\" (UniqueName: \"kubernetes.io/projected/1a944f13-33eb-4c3c-906f-a091e2bc9655-kube-api-access-99n98\") pod \"1a944f13-33eb-4c3c-906f-a091e2bc9655\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.047195 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-config-data\") pod \"1a944f13-33eb-4c3c-906f-a091e2bc9655\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.047397 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-scripts\") pod \"1a944f13-33eb-4c3c-906f-a091e2bc9655\" (UID: \"1a944f13-33eb-4c3c-906f-a091e2bc9655\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.050779 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-scripts" (OuterVolumeSpecName: "scripts") pod "1a944f13-33eb-4c3c-906f-a091e2bc9655" (UID: "1a944f13-33eb-4c3c-906f-a091e2bc9655"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.052560 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a944f13-33eb-4c3c-906f-a091e2bc9655-kube-api-access-99n98" (OuterVolumeSpecName: "kube-api-access-99n98") pod "1a944f13-33eb-4c3c-906f-a091e2bc9655" (UID: "1a944f13-33eb-4c3c-906f-a091e2bc9655"). InnerVolumeSpecName "kube-api-access-99n98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.052697 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.086436 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a944f13-33eb-4c3c-906f-a091e2bc9655" (UID: "1a944f13-33eb-4c3c-906f-a091e2bc9655"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.106453 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-config-data" (OuterVolumeSpecName: "config-data") pod "1a944f13-33eb-4c3c-906f-a091e2bc9655" (UID: "1a944f13-33eb-4c3c-906f-a091e2bc9655"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.149108 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.149145 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99n98\" (UniqueName: \"kubernetes.io/projected/1a944f13-33eb-4c3c-906f-a091e2bc9655-kube-api-access-99n98\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.149159 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.149171 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a944f13-33eb-4c3c-906f-a091e2bc9655-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.176785 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c06d4fd-0e97-401a-a450-92e7e1c22131" path="/var/lib/kubelet/pods/3c06d4fd-0e97-401a-a450-92e7e1c22131/volumes" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.514828 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.554929 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h77hj\" (UniqueName: \"kubernetes.io/projected/59e88785-c8fd-48d3-8f2b-1280ce82c255-kube-api-access-h77hj\") pod \"59e88785-c8fd-48d3-8f2b-1280ce82c255\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.555164 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e88785-c8fd-48d3-8f2b-1280ce82c255-logs\") pod \"59e88785-c8fd-48d3-8f2b-1280ce82c255\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.555276 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-combined-ca-bundle\") pod \"59e88785-c8fd-48d3-8f2b-1280ce82c255\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.555328 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-config-data\") pod \"59e88785-c8fd-48d3-8f2b-1280ce82c255\" (UID: \"59e88785-c8fd-48d3-8f2b-1280ce82c255\") " Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.556092 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e88785-c8fd-48d3-8f2b-1280ce82c255-logs" (OuterVolumeSpecName: "logs") pod "59e88785-c8fd-48d3-8f2b-1280ce82c255" (UID: "59e88785-c8fd-48d3-8f2b-1280ce82c255"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.557037 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: W0103 04:37:31.558042 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5668e757_efa2_4bac_a269_6e2cdd9dbfef.slice/crio-c243e6719914cf16ef62df6081ba2dfbc84b96891f4363dfaad5e2e76d1eee17 WatchSource:0}: Error finding container c243e6719914cf16ef62df6081ba2dfbc84b96891f4363dfaad5e2e76d1eee17: Status 404 returned error can't find the container with id c243e6719914cf16ef62df6081ba2dfbc84b96891f4363dfaad5e2e76d1eee17 Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.560643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e88785-c8fd-48d3-8f2b-1280ce82c255-kube-api-access-h77hj" (OuterVolumeSpecName: "kube-api-access-h77hj") pod "59e88785-c8fd-48d3-8f2b-1280ce82c255" (UID: "59e88785-c8fd-48d3-8f2b-1280ce82c255"). InnerVolumeSpecName "kube-api-access-h77hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.580694 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-config-data" (OuterVolumeSpecName: "config-data") pod "59e88785-c8fd-48d3-8f2b-1280ce82c255" (UID: "59e88785-c8fd-48d3-8f2b-1280ce82c255"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.598308 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59e88785-c8fd-48d3-8f2b-1280ce82c255" (UID: "59e88785-c8fd-48d3-8f2b-1280ce82c255"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.658150 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.658189 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e88785-c8fd-48d3-8f2b-1280ce82c255-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.658201 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h77hj\" (UniqueName: \"kubernetes.io/projected/59e88785-c8fd-48d3-8f2b-1280ce82c255-kube-api-access-h77hj\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.658214 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e88785-c8fd-48d3-8f2b-1280ce82c255-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.677098 4865 generic.go:334] "Generic (PLEG): container finished" podID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerID="c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb" exitCode=0 Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.677235 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e88785-c8fd-48d3-8f2b-1280ce82c255","Type":"ContainerDied","Data":"c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb"} Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.677239 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.677694 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59e88785-c8fd-48d3-8f2b-1280ce82c255","Type":"ContainerDied","Data":"4ffca361d2c5565319452961405162301a5d9d5c5bbed2ba97de78c165d2d993"} Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.677730 4865 scope.go:117] "RemoveContainer" containerID="c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.682897 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: E0103 04:37:31.683456 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a944f13-33eb-4c3c-906f-a091e2bc9655" containerName="nova-cell1-conductor-db-sync" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.683488 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a944f13-33eb-4c3c-906f-a091e2bc9655" containerName="nova-cell1-conductor-db-sync" Jan 03 04:37:31 crc kubenswrapper[4865]: E0103 04:37:31.683525 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-api" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.683538 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-api" Jan 03 04:37:31 crc kubenswrapper[4865]: E0103 04:37:31.683575 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-log" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.683588 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-log" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.683866 4865 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1a944f13-33eb-4c3c-906f-a091e2bc9655" containerName="nova-cell1-conductor-db-sync" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.683907 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-api" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.683936 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" containerName="nova-api-log" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.684915 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.695306 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.695395 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f92b473a-1335-438f-bafa-61998fbda5fb","Type":"ContainerDied","Data":"069e1e4b08c0513e7270c1a2bdc3c7ed2366bed1d26ae9adc93de243a7868744"} Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.697430 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5668e757-efa2-4bac-a269-6e2cdd9dbfef","Type":"ContainerStarted","Data":"c243e6719914cf16ef62df6081ba2dfbc84b96891f4363dfaad5e2e76d1eee17"} Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.699830 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" event={"ID":"1a944f13-33eb-4c3c-906f-a091e2bc9655","Type":"ContainerDied","Data":"70a30d1a1bd865f1ff215a4180d63fabe4f67e4ee909d25499d90d17e21e6711"} Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.699867 4865 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="70a30d1a1bd865f1ff215a4180d63fabe4f67e4ee909d25499d90d17e21e6711" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.699927 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xjsrz" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.704682 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.718143 4865 scope.go:117] "RemoveContainer" containerID="4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.742197 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.756561 4865 scope.go:117] "RemoveContainer" containerID="c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.757076 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: E0103 04:37:31.757109 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb\": container with ID starting with c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb not found: ID does not exist" containerID="c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.757152 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb"} err="failed to get container status \"c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb\": rpc error: code = NotFound desc = could not find container 
\"c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb\": container with ID starting with c76858786981377c5077494d1014f9e4870356ada41d431be42472561f6e29eb not found: ID does not exist" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.757188 4865 scope.go:117] "RemoveContainer" containerID="4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84" Jan 03 04:37:31 crc kubenswrapper[4865]: E0103 04:37:31.757635 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84\": container with ID starting with 4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84 not found: ID does not exist" containerID="4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.757659 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84"} err="failed to get container status \"4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84\": rpc error: code = NotFound desc = could not find container \"4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84\": container with ID starting with 4ce5ea0a84a8086dd4b0687c260f01cfc4e910c1b3318ecfd0aa3461abd03d84 not found: ID does not exist" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.757675 4865 scope.go:117] "RemoveContainer" containerID="16d58b18927a9b8238d937aa0e0b5a3a0ad1ae3f0653dd8e7f02c6e786b08d78" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.759322 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" 
Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.759374 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsn9\" (UniqueName: \"kubernetes.io/projected/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-kube-api-access-vvsn9\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.759437 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.769919 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.777984 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.780048 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.788266 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.791349 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.795067 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.806813 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.808290 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.811797 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.818274 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.865416 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.865473 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-config-data\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.865604 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.865655 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxzvx\" (UniqueName: \"kubernetes.io/projected/711b4fc2-6564-4370-8c37-3a7350b69e6b-kube-api-access-fxzvx\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.866200 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.866538 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-logs\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.869301 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-config-data\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.869390 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq2kp\" (UniqueName: 
\"kubernetes.io/projected/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-kube-api-access-cq2kp\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.869549 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.869593 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsn9\" (UniqueName: \"kubernetes.io/projected/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-kube-api-access-vvsn9\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.884740 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.885043 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.893527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsn9\" (UniqueName: \"kubernetes.io/projected/0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0-kube-api-access-vvsn9\") pod \"nova-cell1-conductor-0\" (UID: 
\"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0\") " pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.898622 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.898943 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-central-agent" containerID="cri-o://922e1a99cf62121b5c9cba4f55e20d0cd64238d995a35e69647babd5463eaf49" gracePeriod=30 Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.899246 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="proxy-httpd" containerID="cri-o://252b0b9da1213b561e3d4a97786619216a806e9eb63c7c756e0cbc2141a06093" gracePeriod=30 Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.899304 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="sg-core" containerID="cri-o://af63e41a4cf78979d44eb4d2bf375fd491cbe0fc52cc057711d66aa381c1b922" gracePeriod=30 Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.899340 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-notification-agent" containerID="cri-o://984df135619470d8f883e0c3fd5e174aab8ad0c948074fabf2392b9e582aaec9" gracePeriod=30 Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.971762 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-config-data\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 
04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.972254 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.972294 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxzvx\" (UniqueName: \"kubernetes.io/projected/711b4fc2-6564-4370-8c37-3a7350b69e6b-kube-api-access-fxzvx\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.972362 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.972408 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-logs\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.972460 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-config-data\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.972485 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq2kp\" (UniqueName: 
\"kubernetes.io/projected/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-kube-api-access-cq2kp\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.975414 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-logs\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.979885 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-config-data\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.979998 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.982641 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.987057 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-config-data\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.991796 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq2kp\" (UniqueName: \"kubernetes.io/projected/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-kube-api-access-cq2kp\") pod \"nova-api-0\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " pod="openstack/nova-api-0" Jan 03 04:37:31 crc kubenswrapper[4865]: I0103 04:37:31.995537 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxzvx\" (UniqueName: \"kubernetes.io/projected/711b4fc2-6564-4370-8c37-3a7350b69e6b-kube-api-access-fxzvx\") pod \"nova-scheduler-0\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " pod="openstack/nova-scheduler-0" Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.022755 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.032804 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.034163 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.091058 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.101888 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.512514 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 03 04:37:32 crc kubenswrapper[4865]: W0103 04:37:32.524278 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c9ddcb0_e282_46fd_8d38_f5ec2ce4aeb0.slice/crio-745bbb7292f9162a9f0d7936646108159c0b6789ae5a4680add6b547d05f9f60 WatchSource:0}: Error finding container 745bbb7292f9162a9f0d7936646108159c0b6789ae5a4680add6b547d05f9f60: Status 404 returned error can't find the container with id 745bbb7292f9162a9f0d7936646108159c0b6789ae5a4680add6b547d05f9f60 Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.602056 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:37:32 crc kubenswrapper[4865]: W0103 04:37:32.606332 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod711b4fc2_6564_4370_8c37_3a7350b69e6b.slice/crio-0fa59a05a5a7fd82383fe35c35cab70b0120290c73cd147b7871c34019ed8071 WatchSource:0}: Error finding container 0fa59a05a5a7fd82383fe35c35cab70b0120290c73cd147b7871c34019ed8071: Status 404 returned error can't find the container with id 0fa59a05a5a7fd82383fe35c35cab70b0120290c73cd147b7871c34019ed8071 Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.668482 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:32 crc kubenswrapper[4865]: W0103 04:37:32.674354 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8ef04f6_9d9d_49cf_b2de_f890e43b8035.slice/crio-5d868cecfd77231618ec08dd87c488a419ec1fe5c22db8078cfce8e33313f571 WatchSource:0}: Error finding container 
5d868cecfd77231618ec08dd87c488a419ec1fe5c22db8078cfce8e33313f571: Status 404 returned error can't find the container with id 5d868cecfd77231618ec08dd87c488a419ec1fe5c22db8078cfce8e33313f571 Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.715984 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5668e757-efa2-4bac-a269-6e2cdd9dbfef","Type":"ContainerStarted","Data":"6244c8c3d3b6186cc616507200c5a2676c5beeb6fadc28e004a0eeea62d2b838"} Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.717012 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.720447 4865 generic.go:334] "Generic (PLEG): container finished" podID="ecdb1d03-2f79-474a-a420-69d683351240" containerID="252b0b9da1213b561e3d4a97786619216a806e9eb63c7c756e0cbc2141a06093" exitCode=0 Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.720477 4865 generic.go:334] "Generic (PLEG): container finished" podID="ecdb1d03-2f79-474a-a420-69d683351240" containerID="af63e41a4cf78979d44eb4d2bf375fd491cbe0fc52cc057711d66aa381c1b922" exitCode=2 Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.720486 4865 generic.go:334] "Generic (PLEG): container finished" podID="ecdb1d03-2f79-474a-a420-69d683351240" containerID="922e1a99cf62121b5c9cba4f55e20d0cd64238d995a35e69647babd5463eaf49" exitCode=0 Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.720512 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerDied","Data":"252b0b9da1213b561e3d4a97786619216a806e9eb63c7c756e0cbc2141a06093"} Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.720556 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerDied","Data":"af63e41a4cf78979d44eb4d2bf375fd491cbe0fc52cc057711d66aa381c1b922"} Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.720570 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerDied","Data":"922e1a99cf62121b5c9cba4f55e20d0cd64238d995a35e69647babd5463eaf49"} Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.721783 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0","Type":"ContainerStarted","Data":"745bbb7292f9162a9f0d7936646108159c0b6789ae5a4680add6b547d05f9f60"} Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.724212 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"711b4fc2-6564-4370-8c37-3a7350b69e6b","Type":"ContainerStarted","Data":"0fa59a05a5a7fd82383fe35c35cab70b0120290c73cd147b7871c34019ed8071"} Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.725366 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8ef04f6-9d9d-49cf-b2de-f890e43b8035","Type":"ContainerStarted","Data":"5d868cecfd77231618ec08dd87c488a419ec1fe5c22db8078cfce8e33313f571"} Jan 03 04:37:32 crc kubenswrapper[4865]: I0103 04:37:32.737840 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.370567479 podStartE2EDuration="2.737825735s" podCreationTimestamp="2026-01-03 04:37:30 +0000 UTC" firstStartedPulling="2026-01-03 04:37:31.56076648 +0000 UTC m=+1278.677819675" lastFinishedPulling="2026-01-03 04:37:31.928024746 +0000 UTC m=+1279.045077931" observedRunningTime="2026-01-03 04:37:32.732754168 +0000 UTC m=+1279.849807353" watchObservedRunningTime="2026-01-03 04:37:32.737825735 +0000 UTC m=+1279.854878920" Jan 03 04:37:33 crc 
kubenswrapper[4865]: I0103 04:37:33.170631 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e88785-c8fd-48d3-8f2b-1280ce82c255" path="/var/lib/kubelet/pods/59e88785-c8fd-48d3-8f2b-1280ce82c255/volumes" Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.172247 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92b473a-1335-438f-bafa-61998fbda5fb" path="/var/lib/kubelet/pods/f92b473a-1335-438f-bafa-61998fbda5fb/volumes" Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.756253 4865 generic.go:334] "Generic (PLEG): container finished" podID="ecdb1d03-2f79-474a-a420-69d683351240" containerID="984df135619470d8f883e0c3fd5e174aab8ad0c948074fabf2392b9e582aaec9" exitCode=0 Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.756327 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerDied","Data":"984df135619470d8f883e0c3fd5e174aab8ad0c948074fabf2392b9e582aaec9"} Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.759676 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0","Type":"ContainerStarted","Data":"f4c0aab72505f7c330238eb7d6c9c861bd31ff83b9ccaa8707cd6f00a91fce2b"} Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.762442 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.776813 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"711b4fc2-6564-4370-8c37-3a7350b69e6b","Type":"ContainerStarted","Data":"a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1"} Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.783054 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b8ef04f6-9d9d-49cf-b2de-f890e43b8035","Type":"ContainerStarted","Data":"06ed2031af41e96b933cdc06a0f900122c1acdf91eabf0f922e8d24e1e161fed"} Jan 03 04:37:33 crc kubenswrapper[4865]: I0103 04:37:33.787177 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.7869565229999997 podStartE2EDuration="2.786956523s" podCreationTimestamp="2026-01-03 04:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:33.781711871 +0000 UTC m=+1280.898765056" watchObservedRunningTime="2026-01-03 04:37:33.786956523 +0000 UTC m=+1280.904009708" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.118570 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.133872 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-config-data\") pod \"ecdb1d03-2f79-474a-a420-69d683351240\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.133954 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-sg-core-conf-yaml\") pod \"ecdb1d03-2f79-474a-a420-69d683351240\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.134023 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-log-httpd\") pod \"ecdb1d03-2f79-474a-a420-69d683351240\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 
04:37:34.134086 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-scripts\") pod \"ecdb1d03-2f79-474a-a420-69d683351240\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.134117 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-run-httpd\") pod \"ecdb1d03-2f79-474a-a420-69d683351240\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.134180 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm2k2\" (UniqueName: \"kubernetes.io/projected/ecdb1d03-2f79-474a-a420-69d683351240-kube-api-access-lm2k2\") pod \"ecdb1d03-2f79-474a-a420-69d683351240\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.134204 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-combined-ca-bundle\") pod \"ecdb1d03-2f79-474a-a420-69d683351240\" (UID: \"ecdb1d03-2f79-474a-a420-69d683351240\") " Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.135090 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ecdb1d03-2f79-474a-a420-69d683351240" (UID: "ecdb1d03-2f79-474a-a420-69d683351240"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.135423 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ecdb1d03-2f79-474a-a420-69d683351240" (UID: "ecdb1d03-2f79-474a-a420-69d683351240"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.140925 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecdb1d03-2f79-474a-a420-69d683351240-kube-api-access-lm2k2" (OuterVolumeSpecName: "kube-api-access-lm2k2") pod "ecdb1d03-2f79-474a-a420-69d683351240" (UID: "ecdb1d03-2f79-474a-a420-69d683351240"). InnerVolumeSpecName "kube-api-access-lm2k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.150883 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-scripts" (OuterVolumeSpecName: "scripts") pod "ecdb1d03-2f79-474a-a420-69d683351240" (UID: "ecdb1d03-2f79-474a-a420-69d683351240"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.162940 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.162924605 podStartE2EDuration="3.162924605s" podCreationTimestamp="2026-01-03 04:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:33.807812416 +0000 UTC m=+1280.924865611" watchObservedRunningTime="2026-01-03 04:37:34.162924605 +0000 UTC m=+1281.279977790" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.182796 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ecdb1d03-2f79-474a-a420-69d683351240" (UID: "ecdb1d03-2f79-474a-a420-69d683351240"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.235600 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.235630 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.235640 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm2k2\" (UniqueName: \"kubernetes.io/projected/ecdb1d03-2f79-474a-a420-69d683351240-kube-api-access-lm2k2\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.235651 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.235660 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecdb1d03-2f79-474a-a420-69d683351240-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.238230 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecdb1d03-2f79-474a-a420-69d683351240" (UID: "ecdb1d03-2f79-474a-a420-69d683351240"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.336316 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.367897 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-config-data" (OuterVolumeSpecName: "config-data") pod "ecdb1d03-2f79-474a-a420-69d683351240" (UID: "ecdb1d03-2f79-474a-a420-69d683351240"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.437482 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecdb1d03-2f79-474a-a420-69d683351240-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.800143 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecdb1d03-2f79-474a-a420-69d683351240","Type":"ContainerDied","Data":"ceef78671e87413a9f2aba33eee7487f0bd2bed72dbb68e8573284bb8d688929"} Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.800210 4865 scope.go:117] "RemoveContainer" containerID="252b0b9da1213b561e3d4a97786619216a806e9eb63c7c756e0cbc2141a06093" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.800395 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.815372 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8ef04f6-9d9d-49cf-b2de-f890e43b8035","Type":"ContainerStarted","Data":"87d87b7344f88804b5c5e8ab9006db3dc372917975e43b234da716b9b50177a9"} Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.839645 4865 scope.go:117] "RemoveContainer" containerID="af63e41a4cf78979d44eb4d2bf375fd491cbe0fc52cc057711d66aa381c1b922" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.843149 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.84313694 podStartE2EDuration="3.84313694s" podCreationTimestamp="2026-01-03 04:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:34.836466641 +0000 UTC m=+1281.953519816" watchObservedRunningTime="2026-01-03 04:37:34.84313694 +0000 UTC m=+1281.960190125" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.865722 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.874039 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.884366 4865 scope.go:117] "RemoveContainer" containerID="984df135619470d8f883e0c3fd5e174aab8ad0c948074fabf2392b9e582aaec9" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.888518 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:34 crc kubenswrapper[4865]: E0103 04:37:34.889085 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-central-agent" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889111 4865 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-central-agent" Jan 03 04:37:34 crc kubenswrapper[4865]: E0103 04:37:34.889123 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="sg-core" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889131 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="sg-core" Jan 03 04:37:34 crc kubenswrapper[4865]: E0103 04:37:34.889200 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-notification-agent" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889210 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-notification-agent" Jan 03 04:37:34 crc kubenswrapper[4865]: E0103 04:37:34.889249 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="proxy-httpd" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889259 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="proxy-httpd" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889536 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="sg-core" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889588 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-central-agent" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889600 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="ceilometer-notification-agent" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.889614 
4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecdb1d03-2f79-474a-a420-69d683351240" containerName="proxy-httpd" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.892348 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.901830 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.904366 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.904589 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.904730 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.911631 4865 scope.go:117] "RemoveContainer" containerID="922e1a99cf62121b5c9cba4f55e20d0cd64238d995a35e69647babd5463eaf49" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.945838 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-run-httpd\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.945933 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-log-httpd\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.945964 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-config-data\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.946000 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4nn2\" (UniqueName: \"kubernetes.io/projected/7934867b-b7e6-4a4f-adc1-90f592ba58ff-kube-api-access-m4nn2\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.946033 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.946069 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-scripts\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.946083 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:34 crc kubenswrapper[4865]: I0103 04:37:34.946104 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.050764 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-scripts\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.050853 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.050921 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.051004 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-run-httpd\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.051171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-log-httpd\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 
04:37:35.051245 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-config-data\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.051339 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4nn2\" (UniqueName: \"kubernetes.io/projected/7934867b-b7e6-4a4f-adc1-90f592ba58ff-kube-api-access-m4nn2\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.051471 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.053458 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-log-httpd\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.053550 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-run-httpd\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.058376 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.059515 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.059865 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.061545 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-scripts\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.074834 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-config-data\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.075888 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4nn2\" (UniqueName: \"kubernetes.io/projected/7934867b-b7e6-4a4f-adc1-90f592ba58ff-kube-api-access-m4nn2\") pod \"ceilometer-0\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.174545 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecdb1d03-2f79-474a-a420-69d683351240" 
path="/var/lib/kubelet/pods/ecdb1d03-2f79-474a-a420-69d683351240/volumes" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.213192 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.705742 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:35 crc kubenswrapper[4865]: I0103 04:37:35.824905 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerStarted","Data":"0601eceabdc95bcfd2fedf5b04200d7325e94f019221dee795777ea30d3447f7"} Jan 03 04:37:36 crc kubenswrapper[4865]: I0103 04:37:36.843315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerStarted","Data":"fdb7f2a83a54303bb7424b8f365ba7d5a43deb8b7a865003ff6c89ffe4e5d780"} Jan 03 04:37:37 crc kubenswrapper[4865]: I0103 04:37:37.032977 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 04:37:37 crc kubenswrapper[4865]: I0103 04:37:37.033024 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 04:37:37 crc kubenswrapper[4865]: I0103 04:37:37.103267 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 03 04:37:37 crc kubenswrapper[4865]: I0103 04:37:37.852457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerStarted","Data":"91911604bc0488cabe4c91bbfd460422ded6a5b779ddc6fb706c7553fdfcd70c"} Jan 03 04:37:38 crc kubenswrapper[4865]: I0103 04:37:38.051595 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 04:37:38 crc kubenswrapper[4865]: I0103 04:37:38.051589 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 04:37:38 crc kubenswrapper[4865]: I0103 04:37:38.863542 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerStarted","Data":"309ea54fbfb107db8353712d53745be40651eab037ec88fb0041bf5b918f8b02"} Jan 03 04:37:40 crc kubenswrapper[4865]: I0103 04:37:40.886437 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerStarted","Data":"54d955c78823463e2b83ab022d4a1743e00439a60c99dcc62a414ebf346ec6e5"} Jan 03 04:37:40 crc kubenswrapper[4865]: I0103 04:37:40.888443 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 04:37:40 crc kubenswrapper[4865]: I0103 04:37:40.932838 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.456004735 podStartE2EDuration="6.932810351s" podCreationTimestamp="2026-01-03 04:37:34 +0000 UTC" firstStartedPulling="2026-01-03 04:37:35.715082709 +0000 UTC m=+1282.832135904" lastFinishedPulling="2026-01-03 04:37:40.191888335 +0000 UTC m=+1287.308941520" observedRunningTime="2026-01-03 04:37:40.912531322 +0000 UTC m=+1288.029584547" watchObservedRunningTime="2026-01-03 04:37:40.932810351 +0000 UTC m=+1288.049863576" Jan 03 04:37:41 crc kubenswrapper[4865]: I0103 04:37:41.064895 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 03 04:37:42 crc kubenswrapper[4865]: I0103 04:37:42.091605 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 04:37:42 crc kubenswrapper[4865]: I0103 04:37:42.091893 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 04:37:42 crc kubenswrapper[4865]: I0103 04:37:42.103296 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 03 04:37:42 crc kubenswrapper[4865]: I0103 04:37:42.149655 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 03 04:37:42 crc kubenswrapper[4865]: I0103 04:37:42.152483 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 03 04:37:42 crc kubenswrapper[4865]: I0103 04:37:42.936751 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 03 04:37:43 crc kubenswrapper[4865]: I0103 04:37:43.174590 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 04:37:43 crc kubenswrapper[4865]: I0103 04:37:43.174876 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 04:37:47 crc kubenswrapper[4865]: I0103 04:37:47.043461 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Jan 03 04:37:47 crc kubenswrapper[4865]: I0103 04:37:47.046664 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 03 04:37:47 crc kubenswrapper[4865]: I0103 04:37:47.068673 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 04:37:47 crc kubenswrapper[4865]: I0103 04:37:47.996032 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 04:37:49 crc kubenswrapper[4865]: I0103 04:37:49.914755 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:49 crc kubenswrapper[4865]: I0103 04:37:49.997682 4865 generic.go:334] "Generic (PLEG): container finished" podID="43f7af3a-c204-41f8-a121-b9ab298fffa8" containerID="40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6" exitCode=137 Jan 03 04:37:49 crc kubenswrapper[4865]: I0103 04:37:49.997788 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43f7af3a-c204-41f8-a121-b9ab298fffa8","Type":"ContainerDied","Data":"40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6"} Jan 03 04:37:49 crc kubenswrapper[4865]: I0103 04:37:49.997848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43f7af3a-c204-41f8-a121-b9ab298fffa8","Type":"ContainerDied","Data":"03fa87a0028cf169cb3afe816fdf7ea1dc1fd8541a281baaabc2ed02079f01b4"} Jan 03 04:37:49 crc kubenswrapper[4865]: I0103 04:37:49.997875 4865 scope.go:117] "RemoveContainer" containerID="40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6" Jan 03 04:37:49 crc kubenswrapper[4865]: I0103 04:37:49.997874 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.021746 4865 scope.go:117] "RemoveContainer" containerID="40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6" Jan 03 04:37:50 crc kubenswrapper[4865]: E0103 04:37:50.022327 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6\": container with ID starting with 40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6 not found: ID does not exist" containerID="40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.022406 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6"} err="failed to get container status \"40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6\": rpc error: code = NotFound desc = could not find container \"40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6\": container with ID starting with 40928774db3d0cdea8b148c0b6e5b4f9a4cbf54309f962d35559f3ce5de1e0f6 not found: ID does not exist" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.044891 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-config-data\") pod \"43f7af3a-c204-41f8-a121-b9ab298fffa8\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.045332 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-combined-ca-bundle\") pod \"43f7af3a-c204-41f8-a121-b9ab298fffa8\" (UID: 
\"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.046673 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhsn6\" (UniqueName: \"kubernetes.io/projected/43f7af3a-c204-41f8-a121-b9ab298fffa8-kube-api-access-rhsn6\") pod \"43f7af3a-c204-41f8-a121-b9ab298fffa8\" (UID: \"43f7af3a-c204-41f8-a121-b9ab298fffa8\") " Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.052277 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f7af3a-c204-41f8-a121-b9ab298fffa8-kube-api-access-rhsn6" (OuterVolumeSpecName: "kube-api-access-rhsn6") pod "43f7af3a-c204-41f8-a121-b9ab298fffa8" (UID: "43f7af3a-c204-41f8-a121-b9ab298fffa8"). InnerVolumeSpecName "kube-api-access-rhsn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.086123 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43f7af3a-c204-41f8-a121-b9ab298fffa8" (UID: "43f7af3a-c204-41f8-a121-b9ab298fffa8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.095143 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-config-data" (OuterVolumeSpecName: "config-data") pod "43f7af3a-c204-41f8-a121-b9ab298fffa8" (UID: "43f7af3a-c204-41f8-a121-b9ab298fffa8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.149295 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhsn6\" (UniqueName: \"kubernetes.io/projected/43f7af3a-c204-41f8-a121-b9ab298fffa8-kube-api-access-rhsn6\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.149554 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.149619 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f7af3a-c204-41f8-a121-b9ab298fffa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.344089 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.369321 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.381416 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:50 crc kubenswrapper[4865]: E0103 04:37:50.381905 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f7af3a-c204-41f8-a121-b9ab298fffa8" containerName="nova-cell1-novncproxy-novncproxy" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.381928 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f7af3a-c204-41f8-a121-b9ab298fffa8" containerName="nova-cell1-novncproxy-novncproxy" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.382149 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f7af3a-c204-41f8-a121-b9ab298fffa8" containerName="nova-cell1-novncproxy-novncproxy" Jan 03 
04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.382906 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.385776 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.386094 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.386287 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.405141 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.454855 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.454906 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwk6n\" (UniqueName: \"kubernetes.io/projected/02276ef3-b599-4ad0-be9e-690430084e13-kube-api-access-kwk6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.454938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.455024 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.455047 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.557576 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.557910 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.558129 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.558221 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwk6n\" (UniqueName: \"kubernetes.io/projected/02276ef3-b599-4ad0-be9e-690430084e13-kube-api-access-kwk6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.558301 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.563103 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.563607 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.564505 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 
04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.565044 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02276ef3-b599-4ad0-be9e-690430084e13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.593784 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwk6n\" (UniqueName: \"kubernetes.io/projected/02276ef3-b599-4ad0-be9e-690430084e13-kube-api-access-kwk6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"02276ef3-b599-4ad0-be9e-690430084e13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:50 crc kubenswrapper[4865]: I0103 04:37:50.714423 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:51 crc kubenswrapper[4865]: I0103 04:37:51.174461 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f7af3a-c204-41f8-a121-b9ab298fffa8" path="/var/lib/kubelet/pods/43f7af3a-c204-41f8-a121-b9ab298fffa8/volumes" Jan 03 04:37:51 crc kubenswrapper[4865]: I0103 04:37:51.251239 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.037962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"02276ef3-b599-4ad0-be9e-690430084e13","Type":"ContainerStarted","Data":"50dc50be45f123cd37c371825b918b5aa0903ce5f492df14d18a547fc69092de"} Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.038229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"02276ef3-b599-4ad0-be9e-690430084e13","Type":"ContainerStarted","Data":"e9d0e9bdf3728fc4ef16161263dfe3e84e2643dc14359bfcedac669d670b89c4"} Jan 03 04:37:52 crc 
kubenswrapper[4865]: I0103 04:37:52.069647 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.069626263 podStartE2EDuration="2.069626263s" podCreationTimestamp="2026-01-03 04:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:52.061838619 +0000 UTC m=+1299.178891824" watchObservedRunningTime="2026-01-03 04:37:52.069626263 +0000 UTC m=+1299.186679458" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.095700 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.096066 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.096224 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.096271 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.099756 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.100270 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.335069 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-xx7tw"] Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.337012 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.350188 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-xx7tw"] Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.402481 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.402584 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-config\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.402621 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.402684 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjbqj\" (UniqueName: \"kubernetes.io/projected/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-kube-api-access-hjbqj\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.402722 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.402745 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.504390 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-config\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.504452 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.504519 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjbqj\" (UniqueName: \"kubernetes.io/projected/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-kube-api-access-hjbqj\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.504558 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.504579 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.504675 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.505330 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-config\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.505329 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.505617 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.507155 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.507188 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.524240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjbqj\" (UniqueName: \"kubernetes.io/projected/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-kube-api-access-hjbqj\") pod \"dnsmasq-dns-59cf4bdb65-xx7tw\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:52 crc kubenswrapper[4865]: I0103 04:37:52.668789 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:53 crc kubenswrapper[4865]: W0103 04:37:53.177401 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf729f87f_62c3_4ca8_9f48_700d3dd15ac0.slice/crio-eaaad1e4c898863d52b95830c1e3a0a9bf4f6ccb3f58d7a9bbbae4d566a18d51 WatchSource:0}: Error finding container eaaad1e4c898863d52b95830c1e3a0a9bf4f6ccb3f58d7a9bbbae4d566a18d51: Status 404 returned error can't find the container with id eaaad1e4c898863d52b95830c1e3a0a9bf4f6ccb3f58d7a9bbbae4d566a18d51 Jan 03 04:37:53 crc kubenswrapper[4865]: I0103 04:37:53.183675 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-xx7tw"] Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.055093 4865 generic.go:334] "Generic (PLEG): container finished" podID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerID="f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e" exitCode=0 Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.055188 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" event={"ID":"f729f87f-62c3-4ca8-9f48-700d3dd15ac0","Type":"ContainerDied","Data":"f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e"} Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.055233 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" event={"ID":"f729f87f-62c3-4ca8-9f48-700d3dd15ac0","Type":"ContainerStarted","Data":"eaaad1e4c898863d52b95830c1e3a0a9bf4f6ccb3f58d7a9bbbae4d566a18d51"} Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.203499 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.204078 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-central-agent" containerID="cri-o://fdb7f2a83a54303bb7424b8f365ba7d5a43deb8b7a865003ff6c89ffe4e5d780" gracePeriod=30 Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.204185 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="sg-core" containerID="cri-o://309ea54fbfb107db8353712d53745be40651eab037ec88fb0041bf5b918f8b02" gracePeriod=30 Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.204219 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-notification-agent" containerID="cri-o://91911604bc0488cabe4c91bbfd460422ded6a5b779ddc6fb706c7553fdfcd70c" gracePeriod=30 Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.204162 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="proxy-httpd" containerID="cri-o://54d955c78823463e2b83ab022d4a1743e00439a60c99dcc62a414ebf346ec6e5" gracePeriod=30 Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.221476 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": EOF" Jan 03 04:37:54 crc kubenswrapper[4865]: I0103 04:37:54.660628 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.065850 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" event={"ID":"f729f87f-62c3-4ca8-9f48-700d3dd15ac0","Type":"ContainerStarted","Data":"5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a"} Jan 03 
04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.066125 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069104 4865 generic.go:334] "Generic (PLEG): container finished" podID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerID="54d955c78823463e2b83ab022d4a1743e00439a60c99dcc62a414ebf346ec6e5" exitCode=0 Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069133 4865 generic.go:334] "Generic (PLEG): container finished" podID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerID="309ea54fbfb107db8353712d53745be40651eab037ec88fb0041bf5b918f8b02" exitCode=2 Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069141 4865 generic.go:334] "Generic (PLEG): container finished" podID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerID="fdb7f2a83a54303bb7424b8f365ba7d5a43deb8b7a865003ff6c89ffe4e5d780" exitCode=0 Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069193 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerDied","Data":"54d955c78823463e2b83ab022d4a1743e00439a60c99dcc62a414ebf346ec6e5"} Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069243 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerDied","Data":"309ea54fbfb107db8353712d53745be40651eab037ec88fb0041bf5b918f8b02"} Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069261 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerDied","Data":"fdb7f2a83a54303bb7424b8f365ba7d5a43deb8b7a865003ff6c89ffe4e5d780"} Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069333 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-log" containerID="cri-o://06ed2031af41e96b933cdc06a0f900122c1acdf91eabf0f922e8d24e1e161fed" gracePeriod=30 Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.069519 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-api" containerID="cri-o://87d87b7344f88804b5c5e8ab9006db3dc372917975e43b234da716b9b50177a9" gracePeriod=30 Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.094232 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" podStartSLOduration=3.094210449 podStartE2EDuration="3.094210449s" podCreationTimestamp="2026-01-03 04:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:37:55.082494068 +0000 UTC m=+1302.199547263" watchObservedRunningTime="2026-01-03 04:37:55.094210449 +0000 UTC m=+1302.211263654" Jan 03 04:37:55 crc kubenswrapper[4865]: I0103 04:37:55.715327 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:37:56 crc kubenswrapper[4865]: I0103 04:37:56.083175 4865 generic.go:334] "Generic (PLEG): container finished" podID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerID="06ed2031af41e96b933cdc06a0f900122c1acdf91eabf0f922e8d24e1e161fed" exitCode=143 Jan 03 04:37:56 crc kubenswrapper[4865]: I0103 04:37:56.084307 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8ef04f6-9d9d-49cf-b2de-f890e43b8035","Type":"ContainerDied","Data":"06ed2031af41e96b933cdc06a0f900122c1acdf91eabf0f922e8d24e1e161fed"} Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.128047 4865 generic.go:334] "Generic (PLEG): container finished" podID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" 
containerID="91911604bc0488cabe4c91bbfd460422ded6a5b779ddc6fb706c7553fdfcd70c" exitCode=0 Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.128085 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerDied","Data":"91911604bc0488cabe4c91bbfd460422ded6a5b779ddc6fb706c7553fdfcd70c"} Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.436649 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.562373 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-ceilometer-tls-certs\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.562537 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4nn2\" (UniqueName: \"kubernetes.io/projected/7934867b-b7e6-4a4f-adc1-90f592ba58ff-kube-api-access-m4nn2\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.562558 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-config-data\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.563321 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-combined-ca-bundle\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 
03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.563443 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-scripts\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.563480 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-run-httpd\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.563537 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-log-httpd\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.563581 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-sg-core-conf-yaml\") pod \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\" (UID: \"7934867b-b7e6-4a4f-adc1-90f592ba58ff\") " Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.563961 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.564198 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.564242 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.576579 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-scripts" (OuterVolumeSpecName: "scripts") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.576603 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7934867b-b7e6-4a4f-adc1-90f592ba58ff-kube-api-access-m4nn2" (OuterVolumeSpecName: "kube-api-access-m4nn2") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). InnerVolumeSpecName "kube-api-access-m4nn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.614291 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.625101 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.658761 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.660472 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-config-data" (OuterVolumeSpecName: "config-data") pod "7934867b-b7e6-4a4f-adc1-90f592ba58ff" (UID: "7934867b-b7e6-4a4f-adc1-90f592ba58ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.665623 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7934867b-b7e6-4a4f-adc1-90f592ba58ff-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.665685 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.665699 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.665709 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.665721 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4nn2\" (UniqueName: \"kubernetes.io/projected/7934867b-b7e6-4a4f-adc1-90f592ba58ff-kube-api-access-m4nn2\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.665732 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:58 crc kubenswrapper[4865]: I0103 04:37:58.665740 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7934867b-b7e6-4a4f-adc1-90f592ba58ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.150134 4865 
generic.go:334] "Generic (PLEG): container finished" podID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerID="87d87b7344f88804b5c5e8ab9006db3dc372917975e43b234da716b9b50177a9" exitCode=0 Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.150223 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8ef04f6-9d9d-49cf-b2de-f890e43b8035","Type":"ContainerDied","Data":"87d87b7344f88804b5c5e8ab9006db3dc372917975e43b234da716b9b50177a9"} Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.154230 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7934867b-b7e6-4a4f-adc1-90f592ba58ff","Type":"ContainerDied","Data":"0601eceabdc95bcfd2fedf5b04200d7325e94f019221dee795777ea30d3447f7"} Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.154288 4865 scope.go:117] "RemoveContainer" containerID="54d955c78823463e2b83ab022d4a1743e00439a60c99dcc62a414ebf346ec6e5" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.154423 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.191826 4865 scope.go:117] "RemoveContainer" containerID="309ea54fbfb107db8353712d53745be40651eab037ec88fb0041bf5b918f8b02" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.222192 4865 scope.go:117] "RemoveContainer" containerID="91911604bc0488cabe4c91bbfd460422ded6a5b779ddc6fb706c7553fdfcd70c" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.229793 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.244818 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.255546 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:59 crc kubenswrapper[4865]: E0103 04:37:59.256022 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="proxy-httpd" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256040 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="proxy-httpd" Jan 03 04:37:59 crc kubenswrapper[4865]: E0103 04:37:59.256064 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-central-agent" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256071 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-central-agent" Jan 03 04:37:59 crc kubenswrapper[4865]: E0103 04:37:59.256091 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="sg-core" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256098 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="sg-core" Jan 03 04:37:59 crc kubenswrapper[4865]: E0103 04:37:59.256123 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-notification-agent" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256131 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-notification-agent" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256338 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="sg-core" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256368 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-notification-agent" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256407 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="ceilometer-central-agent" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.256429 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" containerName="proxy-httpd" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.258299 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.260196 4865 scope.go:117] "RemoveContainer" containerID="fdb7f2a83a54303bb7424b8f365ba7d5a43deb8b7a865003ff6c89ffe4e5d780" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.260760 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.261214 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.261342 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.266010 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.380394 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/209f508b-63d1-4413-95a8-8e539aaaa606-run-httpd\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.380937 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.381228 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " 
pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.381265 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-scripts\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.381735 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.381839 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8lc7\" (UniqueName: \"kubernetes.io/projected/209f508b-63d1-4413-95a8-8e539aaaa606-kube-api-access-m8lc7\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.382053 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-config-data\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.382186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/209f508b-63d1-4413-95a8-8e539aaaa606-log-httpd\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.429219 4865 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483528 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-config-data\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483584 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/209f508b-63d1-4413-95a8-8e539aaaa606-log-httpd\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/209f508b-63d1-4413-95a8-8e539aaaa606-run-httpd\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483664 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483707 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483724 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-scripts\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483752 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.483775 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8lc7\" (UniqueName: \"kubernetes.io/projected/209f508b-63d1-4413-95a8-8e539aaaa606-kube-api-access-m8lc7\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.484264 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/209f508b-63d1-4413-95a8-8e539aaaa606-log-httpd\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.484369 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/209f508b-63d1-4413-95a8-8e539aaaa606-run-httpd\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.489995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-scripts\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.490156 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.491055 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-config-data\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.491822 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.492924 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/209f508b-63d1-4413-95a8-8e539aaaa606-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.501845 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8lc7\" (UniqueName: \"kubernetes.io/projected/209f508b-63d1-4413-95a8-8e539aaaa606-kube-api-access-m8lc7\") pod \"ceilometer-0\" (UID: \"209f508b-63d1-4413-95a8-8e539aaaa606\") " pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.584864 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-config-data\") pod \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\" (UID: 
\"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.584975 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq2kp\" (UniqueName: \"kubernetes.io/projected/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-kube-api-access-cq2kp\") pod \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.585022 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-logs\") pod \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.585122 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-combined-ca-bundle\") pod \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\" (UID: \"b8ef04f6-9d9d-49cf-b2de-f890e43b8035\") " Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.585722 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-logs" (OuterVolumeSpecName: "logs") pod "b8ef04f6-9d9d-49cf-b2de-f890e43b8035" (UID: "b8ef04f6-9d9d-49cf-b2de-f890e43b8035"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.589532 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-kube-api-access-cq2kp" (OuterVolumeSpecName: "kube-api-access-cq2kp") pod "b8ef04f6-9d9d-49cf-b2de-f890e43b8035" (UID: "b8ef04f6-9d9d-49cf-b2de-f890e43b8035"). InnerVolumeSpecName "kube-api-access-cq2kp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.591010 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.608189 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8ef04f6-9d9d-49cf-b2de-f890e43b8035" (UID: "b8ef04f6-9d9d-49cf-b2de-f890e43b8035"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.611530 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-config-data" (OuterVolumeSpecName: "config-data") pod "b8ef04f6-9d9d-49cf-b2de-f890e43b8035" (UID: "b8ef04f6-9d9d-49cf-b2de-f890e43b8035"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.689007 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq2kp\" (UniqueName: \"kubernetes.io/projected/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-kube-api-access-cq2kp\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.689058 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.689072 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:37:59 crc kubenswrapper[4865]: I0103 04:37:59.689083 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ef04f6-9d9d-49cf-b2de-f890e43b8035-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:00 crc kubenswrapper[4865]: W0103 04:38:00.023840 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod209f508b_63d1_4413_95a8_8e539aaaa606.slice/crio-d406fe0ba7840d4256cded80c4cfa3a69e169ad52e92d571f9a1a05a883d39fb WatchSource:0}: Error finding container d406fe0ba7840d4256cded80c4cfa3a69e169ad52e92d571f9a1a05a883d39fb: Status 404 returned error can't find the container with id d406fe0ba7840d4256cded80c4cfa3a69e169ad52e92d571f9a1a05a883d39fb Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.032559 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.166473 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"209f508b-63d1-4413-95a8-8e539aaaa606","Type":"ContainerStarted","Data":"d406fe0ba7840d4256cded80c4cfa3a69e169ad52e92d571f9a1a05a883d39fb"} Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.169309 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8ef04f6-9d9d-49cf-b2de-f890e43b8035","Type":"ContainerDied","Data":"5d868cecfd77231618ec08dd87c488a419ec1fe5c22db8078cfce8e33313f571"} Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.169361 4865 scope.go:117] "RemoveContainer" containerID="87d87b7344f88804b5c5e8ab9006db3dc372917975e43b234da716b9b50177a9" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.169537 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.202755 4865 scope.go:117] "RemoveContainer" containerID="06ed2031af41e96b933cdc06a0f900122c1acdf91eabf0f922e8d24e1e161fed" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.210445 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.217478 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.240288 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:00 crc kubenswrapper[4865]: E0103 04:38:00.240641 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-api" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.240661 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-api" Jan 03 04:38:00 crc kubenswrapper[4865]: E0103 04:38:00.240674 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" 
containerName="nova-api-log" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.240682 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-log" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.240881 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-log" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.240900 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" containerName="nova-api-api" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.241752 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.244407 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.251634 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.251746 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.256182 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.400809 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.401131 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ff8f1667-ce95-4926-abe3-838cf9883676-logs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.401156 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.401191 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txcdv\" (UniqueName: \"kubernetes.io/projected/ff8f1667-ce95-4926-abe3-838cf9883676-kube-api-access-txcdv\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.401298 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.401423 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-config-data\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.503146 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " 
pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.503207 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8f1667-ce95-4926-abe3-838cf9883676-logs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.503232 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.503265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txcdv\" (UniqueName: \"kubernetes.io/projected/ff8f1667-ce95-4926-abe3-838cf9883676-kube-api-access-txcdv\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.503302 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.503333 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-config-data\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.503997 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ff8f1667-ce95-4926-abe3-838cf9883676-logs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.511045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.511710 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.512369 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.513257 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-config-data\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.521491 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txcdv\" (UniqueName: \"kubernetes.io/projected/ff8f1667-ce95-4926-abe3-838cf9883676-kube-api-access-txcdv\") pod \"nova-api-0\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.605592 4865 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.715114 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:38:00 crc kubenswrapper[4865]: I0103 04:38:00.744981 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.097978 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:01 crc kubenswrapper[4865]: W0103 04:38:01.104177 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff8f1667_ce95_4926_abe3_838cf9883676.slice/crio-952446e89ba7e4c6225d00ccd99984f1183d97ee59cf36786bcd02f8a32e0fae WatchSource:0}: Error finding container 952446e89ba7e4c6225d00ccd99984f1183d97ee59cf36786bcd02f8a32e0fae: Status 404 returned error can't find the container with id 952446e89ba7e4c6225d00ccd99984f1183d97ee59cf36786bcd02f8a32e0fae Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.168853 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7934867b-b7e6-4a4f-adc1-90f592ba58ff" path="/var/lib/kubelet/pods/7934867b-b7e6-4a4f-adc1-90f592ba58ff/volumes" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.171727 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ef04f6-9d9d-49cf-b2de-f890e43b8035" path="/var/lib/kubelet/pods/b8ef04f6-9d9d-49cf-b2de-f890e43b8035/volumes" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.182304 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8f1667-ce95-4926-abe3-838cf9883676","Type":"ContainerStarted","Data":"952446e89ba7e4c6225d00ccd99984f1183d97ee59cf36786bcd02f8a32e0fae"} Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.184373 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"209f508b-63d1-4413-95a8-8e539aaaa606","Type":"ContainerStarted","Data":"d64605dfd45236bc63ad130565519e6609e6e52089891afbf518d9965177fad1"} Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.204317 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.382081 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xjjnb"] Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.387646 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.392302 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.392506 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.412893 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjjnb"] Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.521147 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kswf8\" (UniqueName: \"kubernetes.io/projected/d7635026-837e-4427-943e-d5de8b29c273-kube-api-access-kswf8\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.521239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: 
\"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.521289 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-config-data\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.521309 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-scripts\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.623327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kswf8\" (UniqueName: \"kubernetes.io/projected/d7635026-837e-4427-943e-d5de8b29c273-kube-api-access-kswf8\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.623873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.623982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-config-data\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: 
\"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.624065 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-scripts\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.629019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-scripts\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.630797 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.635506 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-config-data\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.648323 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kswf8\" (UniqueName: \"kubernetes.io/projected/d7635026-837e-4427-943e-d5de8b29c273-kube-api-access-kswf8\") pod \"nova-cell1-cell-mapping-xjjnb\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 
04:38:01 crc kubenswrapper[4865]: I0103 04:38:01.760987 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.198765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8f1667-ce95-4926-abe3-838cf9883676","Type":"ContainerStarted","Data":"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5"} Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.199357 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8f1667-ce95-4926-abe3-838cf9883676","Type":"ContainerStarted","Data":"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027"} Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.208814 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"209f508b-63d1-4413-95a8-8e539aaaa606","Type":"ContainerStarted","Data":"a3fe1bd5ac4b12bbf1fac951c9ba02210117582302b95bc0010d18c7e376b5a4"} Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.246351 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.246328476 podStartE2EDuration="2.246328476s" podCreationTimestamp="2026-01-03 04:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:38:02.236854 +0000 UTC m=+1309.353907245" watchObservedRunningTime="2026-01-03 04:38:02.246328476 +0000 UTC m=+1309.363381671" Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.261664 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjjnb"] Jan 03 04:38:02 crc kubenswrapper[4865]: W0103 04:38:02.262253 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7635026_837e_4427_943e_d5de8b29c273.slice/crio-78d5bf948a79a797eafa8d2b69528e88ad2919c8f277361aedd682e57d1d50fa WatchSource:0}: Error finding container 78d5bf948a79a797eafa8d2b69528e88ad2919c8f277361aedd682e57d1d50fa: Status 404 returned error can't find the container with id 78d5bf948a79a797eafa8d2b69528e88ad2919c8f277361aedd682e57d1d50fa Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.670602 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.785240 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-chpxx"] Jan 03 04:38:02 crc kubenswrapper[4865]: I0103 04:38:02.785499 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" podUID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerName="dnsmasq-dns" containerID="cri-o://0950196f014869fb67e25efedf5e727c44313dee654a2e85aec6ec41d5cd85a0" gracePeriod=10 Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.223728 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xjjnb" event={"ID":"d7635026-837e-4427-943e-d5de8b29c273","Type":"ContainerStarted","Data":"76b1e0861f69b912ea16e944a2124ef9f50fbdee27b35b1225e8182d9ae1c70b"} Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.223992 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xjjnb" event={"ID":"d7635026-837e-4427-943e-d5de8b29c273","Type":"ContainerStarted","Data":"78d5bf948a79a797eafa8d2b69528e88ad2919c8f277361aedd682e57d1d50fa"} Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.227467 4865 generic.go:334] "Generic (PLEG): container finished" podID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerID="0950196f014869fb67e25efedf5e727c44313dee654a2e85aec6ec41d5cd85a0" 
exitCode=0 Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.227523 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" event={"ID":"39ff2f3d-2289-4caf-bc71-622ec16c0038","Type":"ContainerDied","Data":"0950196f014869fb67e25efedf5e727c44313dee654a2e85aec6ec41d5cd85a0"} Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.229791 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"209f508b-63d1-4413-95a8-8e539aaaa606","Type":"ContainerStarted","Data":"cff0ebe8b247ac5ceba3563ecfda4b02e88ffceb6c0a5c3673be977eda9c5fdb"} Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.240731 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xjjnb" podStartSLOduration=2.240715557 podStartE2EDuration="2.240715557s" podCreationTimestamp="2026-01-03 04:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:38:03.236298238 +0000 UTC m=+1310.353351423" watchObservedRunningTime="2026-01-03 04:38:03.240715557 +0000 UTC m=+1310.357768742" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.334951 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.361815 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-swift-storage-0\") pod \"39ff2f3d-2289-4caf-bc71-622ec16c0038\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.362606 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scm48\" (UniqueName: \"kubernetes.io/projected/39ff2f3d-2289-4caf-bc71-622ec16c0038-kube-api-access-scm48\") pod \"39ff2f3d-2289-4caf-bc71-622ec16c0038\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.362706 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-svc\") pod \"39ff2f3d-2289-4caf-bc71-622ec16c0038\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.362737 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-sb\") pod \"39ff2f3d-2289-4caf-bc71-622ec16c0038\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.362757 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-nb\") pod \"39ff2f3d-2289-4caf-bc71-622ec16c0038\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.362812 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-config\") pod \"39ff2f3d-2289-4caf-bc71-622ec16c0038\" (UID: \"39ff2f3d-2289-4caf-bc71-622ec16c0038\") " Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.379035 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ff2f3d-2289-4caf-bc71-622ec16c0038-kube-api-access-scm48" (OuterVolumeSpecName: "kube-api-access-scm48") pod "39ff2f3d-2289-4caf-bc71-622ec16c0038" (UID: "39ff2f3d-2289-4caf-bc71-622ec16c0038"). InnerVolumeSpecName "kube-api-access-scm48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.413324 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "39ff2f3d-2289-4caf-bc71-622ec16c0038" (UID: "39ff2f3d-2289-4caf-bc71-622ec16c0038"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.420426 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39ff2f3d-2289-4caf-bc71-622ec16c0038" (UID: "39ff2f3d-2289-4caf-bc71-622ec16c0038"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.432900 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-config" (OuterVolumeSpecName: "config") pod "39ff2f3d-2289-4caf-bc71-622ec16c0038" (UID: "39ff2f3d-2289-4caf-bc71-622ec16c0038"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.442861 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39ff2f3d-2289-4caf-bc71-622ec16c0038" (UID: "39ff2f3d-2289-4caf-bc71-622ec16c0038"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.453059 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39ff2f3d-2289-4caf-bc71-622ec16c0038" (UID: "39ff2f3d-2289-4caf-bc71-622ec16c0038"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.465018 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.465056 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scm48\" (UniqueName: \"kubernetes.io/projected/39ff2f3d-2289-4caf-bc71-622ec16c0038-kube-api-access-scm48\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.465069 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.465079 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-nb\") on node \"crc\" DevicePath 
\"\"" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.465087 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:03 crc kubenswrapper[4865]: I0103 04:38:03.465098 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ff2f3d-2289-4caf-bc71-622ec16c0038-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.239872 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"209f508b-63d1-4413-95a8-8e539aaaa606","Type":"ContainerStarted","Data":"f545977d5ccc62a03d75dafda4b68efcaa1e4d210defc387b6811b5c1f837306"} Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.240263 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.241790 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" event={"ID":"39ff2f3d-2289-4caf-bc71-622ec16c0038","Type":"ContainerDied","Data":"0656a8411bcf78962cbd77c0e6096ce1af8d4697351536e31d158d92b6c46732"} Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.241839 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-chpxx" Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.241845 4865 scope.go:117] "RemoveContainer" containerID="0950196f014869fb67e25efedf5e727c44313dee654a2e85aec6ec41d5cd85a0" Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.273239 4865 scope.go:117] "RemoveContainer" containerID="bcf45f0f3aae3336a2cb4aa084ded5083b0bc4275e3302a5bb20ba338fa5b385" Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.280090 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.600923221 podStartE2EDuration="5.280073152s" podCreationTimestamp="2026-01-03 04:37:59 +0000 UTC" firstStartedPulling="2026-01-03 04:38:00.02713153 +0000 UTC m=+1307.144184725" lastFinishedPulling="2026-01-03 04:38:03.706281471 +0000 UTC m=+1310.823334656" observedRunningTime="2026-01-03 04:38:04.276839634 +0000 UTC m=+1311.393892829" watchObservedRunningTime="2026-01-03 04:38:04.280073152 +0000 UTC m=+1311.397126337" Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.358490 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-chpxx"] Jan 03 04:38:04 crc kubenswrapper[4865]: I0103 04:38:04.368114 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-chpxx"] Jan 03 04:38:05 crc kubenswrapper[4865]: I0103 04:38:05.167945 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ff2f3d-2289-4caf-bc71-622ec16c0038" path="/var/lib/kubelet/pods/39ff2f3d-2289-4caf-bc71-622ec16c0038/volumes" Jan 03 04:38:07 crc kubenswrapper[4865]: I0103 04:38:07.274753 4865 generic.go:334] "Generic (PLEG): container finished" podID="d7635026-837e-4427-943e-d5de8b29c273" containerID="76b1e0861f69b912ea16e944a2124ef9f50fbdee27b35b1225e8182d9ae1c70b" exitCode=0 Jan 03 04:38:07 crc kubenswrapper[4865]: I0103 04:38:07.274802 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-xjjnb" event={"ID":"d7635026-837e-4427-943e-d5de8b29c273","Type":"ContainerDied","Data":"76b1e0861f69b912ea16e944a2124ef9f50fbdee27b35b1225e8182d9ae1c70b"} Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.666956 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.769550 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kswf8\" (UniqueName: \"kubernetes.io/projected/d7635026-837e-4427-943e-d5de8b29c273-kube-api-access-kswf8\") pod \"d7635026-837e-4427-943e-d5de8b29c273\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.769590 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-config-data\") pod \"d7635026-837e-4427-943e-d5de8b29c273\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.769635 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-scripts\") pod \"d7635026-837e-4427-943e-d5de8b29c273\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.769722 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-combined-ca-bundle\") pod \"d7635026-837e-4427-943e-d5de8b29c273\" (UID: \"d7635026-837e-4427-943e-d5de8b29c273\") " Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.785874 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-scripts" (OuterVolumeSpecName: "scripts") pod "d7635026-837e-4427-943e-d5de8b29c273" (UID: "d7635026-837e-4427-943e-d5de8b29c273"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.788197 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7635026-837e-4427-943e-d5de8b29c273-kube-api-access-kswf8" (OuterVolumeSpecName: "kube-api-access-kswf8") pod "d7635026-837e-4427-943e-d5de8b29c273" (UID: "d7635026-837e-4427-943e-d5de8b29c273"). InnerVolumeSpecName "kube-api-access-kswf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.826775 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7635026-837e-4427-943e-d5de8b29c273" (UID: "d7635026-837e-4427-943e-d5de8b29c273"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.829728 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-config-data" (OuterVolumeSpecName: "config-data") pod "d7635026-837e-4427-943e-d5de8b29c273" (UID: "d7635026-837e-4427-943e-d5de8b29c273"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.871250 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kswf8\" (UniqueName: \"kubernetes.io/projected/d7635026-837e-4427-943e-d5de8b29c273-kube-api-access-kswf8\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.871555 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.871566 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:08 crc kubenswrapper[4865]: I0103 04:38:08.871575 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7635026-837e-4427-943e-d5de8b29c273-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.300807 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xjjnb" event={"ID":"d7635026-837e-4427-943e-d5de8b29c273","Type":"ContainerDied","Data":"78d5bf948a79a797eafa8d2b69528e88ad2919c8f277361aedd682e57d1d50fa"} Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.300871 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78d5bf948a79a797eafa8d2b69528e88ad2919c8f277361aedd682e57d1d50fa" Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.300891 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjjnb" Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.531007 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.531336 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-log" containerID="cri-o://dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027" gracePeriod=30 Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.531455 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-api" containerID="cri-o://799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5" gracePeriod=30 Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.548695 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.548985 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="711b4fc2-6564-4370-8c37-3a7350b69e6b" containerName="nova-scheduler-scheduler" containerID="cri-o://a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" gracePeriod=30 Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.577558 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.577828 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-log" containerID="cri-o://062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31" gracePeriod=30 Jan 03 04:38:09 crc kubenswrapper[4865]: I0103 04:38:09.577935 4865 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-metadata" containerID="cri-o://dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe" gracePeriod=30 Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.200244 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301143 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-internal-tls-certs\") pod \"ff8f1667-ce95-4926-abe3-838cf9883676\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301239 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8f1667-ce95-4926-abe3-838cf9883676-logs\") pod \"ff8f1667-ce95-4926-abe3-838cf9883676\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301296 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-combined-ca-bundle\") pod \"ff8f1667-ce95-4926-abe3-838cf9883676\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301357 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-config-data\") pod \"ff8f1667-ce95-4926-abe3-838cf9883676\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301400 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-public-tls-certs\") pod \"ff8f1667-ce95-4926-abe3-838cf9883676\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301453 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txcdv\" (UniqueName: \"kubernetes.io/projected/ff8f1667-ce95-4926-abe3-838cf9883676-kube-api-access-txcdv\") pod \"ff8f1667-ce95-4926-abe3-838cf9883676\" (UID: \"ff8f1667-ce95-4926-abe3-838cf9883676\") " Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301639 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff8f1667-ce95-4926-abe3-838cf9883676-logs" (OuterVolumeSpecName: "logs") pod "ff8f1667-ce95-4926-abe3-838cf9883676" (UID: "ff8f1667-ce95-4926-abe3-838cf9883676"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.301910 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff8f1667-ce95-4926-abe3-838cf9883676-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.306758 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8f1667-ce95-4926-abe3-838cf9883676-kube-api-access-txcdv" (OuterVolumeSpecName: "kube-api-access-txcdv") pod "ff8f1667-ce95-4926-abe3-838cf9883676" (UID: "ff8f1667-ce95-4926-abe3-838cf9883676"). InnerVolumeSpecName "kube-api-access-txcdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.317351 4865 generic.go:334] "Generic (PLEG): container finished" podID="ff8f1667-ce95-4926-abe3-838cf9883676" containerID="799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5" exitCode=0 Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.317415 4865 generic.go:334] "Generic (PLEG): container finished" podID="ff8f1667-ce95-4926-abe3-838cf9883676" containerID="dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027" exitCode=143 Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.317480 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8f1667-ce95-4926-abe3-838cf9883676","Type":"ContainerDied","Data":"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5"} Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.317534 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8f1667-ce95-4926-abe3-838cf9883676","Type":"ContainerDied","Data":"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027"} Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.317545 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff8f1667-ce95-4926-abe3-838cf9883676","Type":"ContainerDied","Data":"952446e89ba7e4c6225d00ccd99984f1183d97ee59cf36786bcd02f8a32e0fae"} Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.317560 4865 scope.go:117] "RemoveContainer" containerID="799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.317699 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.329976 4865 generic.go:334] "Generic (PLEG): container finished" podID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerID="062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31" exitCode=143 Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.330005 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab8d9e87-a1c6-45bf-a9dc-24201830e28a","Type":"ContainerDied","Data":"062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31"} Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.342691 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-config-data" (OuterVolumeSpecName: "config-data") pod "ff8f1667-ce95-4926-abe3-838cf9883676" (UID: "ff8f1667-ce95-4926-abe3-838cf9883676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.345949 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff8f1667-ce95-4926-abe3-838cf9883676" (UID: "ff8f1667-ce95-4926-abe3-838cf9883676"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.354843 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ff8f1667-ce95-4926-abe3-838cf9883676" (UID: "ff8f1667-ce95-4926-abe3-838cf9883676"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.355050 4865 scope.go:117] "RemoveContainer" containerID="dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.371744 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff8f1667-ce95-4926-abe3-838cf9883676" (UID: "ff8f1667-ce95-4926-abe3-838cf9883676"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.371807 4865 scope.go:117] "RemoveContainer" containerID="799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5" Jan 03 04:38:10 crc kubenswrapper[4865]: E0103 04:38:10.372293 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5\": container with ID starting with 799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5 not found: ID does not exist" containerID="799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.372328 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5"} err="failed to get container status \"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5\": rpc error: code = NotFound desc = could not find container \"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5\": container with ID starting with 799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5 not found: ID does not exist" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.372350 4865 scope.go:117] 
"RemoveContainer" containerID="dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027" Jan 03 04:38:10 crc kubenswrapper[4865]: E0103 04:38:10.373020 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027\": container with ID starting with dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027 not found: ID does not exist" containerID="dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.373048 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027"} err="failed to get container status \"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027\": rpc error: code = NotFound desc = could not find container \"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027\": container with ID starting with dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027 not found: ID does not exist" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.373065 4865 scope.go:117] "RemoveContainer" containerID="799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.373311 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5"} err="failed to get container status \"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5\": rpc error: code = NotFound desc = could not find container \"799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5\": container with ID starting with 799e65d2d4eb57d462e1c8b89d258edcf47db8dbc83f2758b942364f12288ae5 not found: ID does not exist" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.373329 4865 
scope.go:117] "RemoveContainer" containerID="dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.373498 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027"} err="failed to get container status \"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027\": rpc error: code = NotFound desc = could not find container \"dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027\": container with ID starting with dacd0bf672aeef3f3e7b0834ff27668fe4349af1f32b5a456b1105f230e6a027 not found: ID does not exist" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.403184 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.403220 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.403230 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.403241 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff8f1667-ce95-4926-abe3-838cf9883676-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.403253 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txcdv\" (UniqueName: 
\"kubernetes.io/projected/ff8f1667-ce95-4926-abe3-838cf9883676-kube-api-access-txcdv\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.646700 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.658091 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696052 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:10 crc kubenswrapper[4865]: E0103 04:38:10.696481 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerName="init" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696498 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerName="init" Jan 03 04:38:10 crc kubenswrapper[4865]: E0103 04:38:10.696509 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-log" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696516 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-log" Jan 03 04:38:10 crc kubenswrapper[4865]: E0103 04:38:10.696529 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerName="dnsmasq-dns" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696534 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerName="dnsmasq-dns" Jan 03 04:38:10 crc kubenswrapper[4865]: E0103 04:38:10.696541 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-api" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696546 4865 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-api" Jan 03 04:38:10 crc kubenswrapper[4865]: E0103 04:38:10.696552 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7635026-837e-4427-943e-d5de8b29c273" containerName="nova-manage" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696558 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7635026-837e-4427-943e-d5de8b29c273" containerName="nova-manage" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696727 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-api" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696752 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ff2f3d-2289-4caf-bc71-622ec16c0038" containerName="dnsmasq-dns" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696761 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" containerName="nova-api-log" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.696772 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7635026-837e-4427-943e-d5de8b29c273" containerName="nova-manage" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.697766 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.705168 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.715999 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.716211 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.716358 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.818195 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.818798 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/673e4191-53ee-4b5d-8bbe-289693bab15d-logs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.818902 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpx22\" (UniqueName: \"kubernetes.io/projected/673e4191-53ee-4b5d-8bbe-289693bab15d-kube-api-access-xpx22\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.818993 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-config-data\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.819167 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.819255 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-public-tls-certs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.921129 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.921196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-public-tls-certs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.921359 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " 
pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.921444 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/673e4191-53ee-4b5d-8bbe-289693bab15d-logs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.921502 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpx22\" (UniqueName: \"kubernetes.io/projected/673e4191-53ee-4b5d-8bbe-289693bab15d-kube-api-access-xpx22\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.921550 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-config-data\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.922588 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/673e4191-53ee-4b5d-8bbe-289693bab15d-logs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.926187 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-public-tls-certs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.926437 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-config-data\") pod 
\"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.927311 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.935284 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/673e4191-53ee-4b5d-8bbe-289693bab15d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:10 crc kubenswrapper[4865]: I0103 04:38:10.944468 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpx22\" (UniqueName: \"kubernetes.io/projected/673e4191-53ee-4b5d-8bbe-289693bab15d-kube-api-access-xpx22\") pod \"nova-api-0\" (UID: \"673e4191-53ee-4b5d-8bbe-289693bab15d\") " pod="openstack/nova-api-0" Jan 03 04:38:11 crc kubenswrapper[4865]: I0103 04:38:11.033118 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 03 04:38:11 crc kubenswrapper[4865]: I0103 04:38:11.191979 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8f1667-ce95-4926-abe3-838cf9883676" path="/var/lib/kubelet/pods/ff8f1667-ce95-4926-abe3-838cf9883676/volumes" Jan 03 04:38:11 crc kubenswrapper[4865]: I0103 04:38:11.499284 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 03 04:38:11 crc kubenswrapper[4865]: W0103 04:38:11.506209 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod673e4191_53ee_4b5d_8bbe_289693bab15d.slice/crio-d2bb17046ae2c146ea0dbec63ea58437cb35c833fcd2adbea1ee52dd2e19842e WatchSource:0}: Error finding container d2bb17046ae2c146ea0dbec63ea58437cb35c833fcd2adbea1ee52dd2e19842e: Status 404 returned error can't find the container with id d2bb17046ae2c146ea0dbec63ea58437cb35c833fcd2adbea1ee52dd2e19842e Jan 03 04:38:12 crc kubenswrapper[4865]: E0103 04:38:12.364573 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1 is running failed: container process not found" containerID="a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 04:38:12 crc kubenswrapper[4865]: E0103 04:38:12.365844 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1 is running failed: container process not found" containerID="a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 04:38:12 crc kubenswrapper[4865]: E0103 04:38:12.376240 4865 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1 is running failed: container process not found" containerID="a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 03 04:38:12 crc kubenswrapper[4865]: E0103 04:38:12.376470 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="711b4fc2-6564-4370-8c37-3a7350b69e6b" containerName="nova-scheduler-scheduler" Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.384844 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"673e4191-53ee-4b5d-8bbe-289693bab15d","Type":"ContainerStarted","Data":"d2bb17046ae2c146ea0dbec63ea58437cb35c833fcd2adbea1ee52dd2e19842e"} Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.650921 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:57764->10.217.0.193:8775: read: connection reset by peer" Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.651095 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:57762->10.217.0.193:8775: read: connection reset by peer" Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.826931 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.957568 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxzvx\" (UniqueName: \"kubernetes.io/projected/711b4fc2-6564-4370-8c37-3a7350b69e6b-kube-api-access-fxzvx\") pod \"711b4fc2-6564-4370-8c37-3a7350b69e6b\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.957670 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-config-data\") pod \"711b4fc2-6564-4370-8c37-3a7350b69e6b\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.957720 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-combined-ca-bundle\") pod \"711b4fc2-6564-4370-8c37-3a7350b69e6b\" (UID: \"711b4fc2-6564-4370-8c37-3a7350b69e6b\") " Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.981034 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711b4fc2-6564-4370-8c37-3a7350b69e6b-kube-api-access-fxzvx" (OuterVolumeSpecName: "kube-api-access-fxzvx") pod "711b4fc2-6564-4370-8c37-3a7350b69e6b" (UID: "711b4fc2-6564-4370-8c37-3a7350b69e6b"). InnerVolumeSpecName "kube-api-access-fxzvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:12 crc kubenswrapper[4865]: I0103 04:38:12.988202 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-config-data" (OuterVolumeSpecName: "config-data") pod "711b4fc2-6564-4370-8c37-3a7350b69e6b" (UID: "711b4fc2-6564-4370-8c37-3a7350b69e6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.024469 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "711b4fc2-6564-4370-8c37-3a7350b69e6b" (UID: "711b4fc2-6564-4370-8c37-3a7350b69e6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.059483 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxzvx\" (UniqueName: \"kubernetes.io/projected/711b4fc2-6564-4370-8c37-3a7350b69e6b-kube-api-access-fxzvx\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.059510 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.059519 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711b4fc2-6564-4370-8c37-3a7350b69e6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.115604 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.165680 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptm2d\" (UniqueName: \"kubernetes.io/projected/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-kube-api-access-ptm2d\") pod \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.165767 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-combined-ca-bundle\") pod \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.165849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-config-data\") pod \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.165889 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-logs\") pod \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.166042 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-nova-metadata-tls-certs\") pod \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\" (UID: \"ab8d9e87-a1c6-45bf-a9dc-24201830e28a\") " Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.172313 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-kube-api-access-ptm2d" (OuterVolumeSpecName: "kube-api-access-ptm2d") pod "ab8d9e87-a1c6-45bf-a9dc-24201830e28a" (UID: "ab8d9e87-a1c6-45bf-a9dc-24201830e28a"). InnerVolumeSpecName "kube-api-access-ptm2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.174085 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-logs" (OuterVolumeSpecName: "logs") pod "ab8d9e87-a1c6-45bf-a9dc-24201830e28a" (UID: "ab8d9e87-a1c6-45bf-a9dc-24201830e28a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.222135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-config-data" (OuterVolumeSpecName: "config-data") pod "ab8d9e87-a1c6-45bf-a9dc-24201830e28a" (UID: "ab8d9e87-a1c6-45bf-a9dc-24201830e28a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.223616 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab8d9e87-a1c6-45bf-a9dc-24201830e28a" (UID: "ab8d9e87-a1c6-45bf-a9dc-24201830e28a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.253947 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ab8d9e87-a1c6-45bf-a9dc-24201830e28a" (UID: "ab8d9e87-a1c6-45bf-a9dc-24201830e28a"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.269756 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptm2d\" (UniqueName: \"kubernetes.io/projected/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-kube-api-access-ptm2d\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.269791 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.269801 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.269813 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-logs\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.269823 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8d9e87-a1c6-45bf-a9dc-24201830e28a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.398543 4865 generic.go:334] "Generic (PLEG): container finished" podID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerID="dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe" exitCode=0 Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.398574 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.398595 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab8d9e87-a1c6-45bf-a9dc-24201830e28a","Type":"ContainerDied","Data":"dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe"} Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.398861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab8d9e87-a1c6-45bf-a9dc-24201830e28a","Type":"ContainerDied","Data":"6e4a1448b0712448658041d5189905884f60dbc27c3b5339578eb27e4e1b73d9"} Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.398907 4865 scope.go:117] "RemoveContainer" containerID="dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.405860 4865 generic.go:334] "Generic (PLEG): container finished" podID="711b4fc2-6564-4370-8c37-3a7350b69e6b" containerID="a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" exitCode=0 Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.405921 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"711b4fc2-6564-4370-8c37-3a7350b69e6b","Type":"ContainerDied","Data":"a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1"} Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.405951 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"711b4fc2-6564-4370-8c37-3a7350b69e6b","Type":"ContainerDied","Data":"0fa59a05a5a7fd82383fe35c35cab70b0120290c73cd147b7871c34019ed8071"} Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.406028 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.408167 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"673e4191-53ee-4b5d-8bbe-289693bab15d","Type":"ContainerStarted","Data":"4a61288af7b09cb3a9bcc487bb6c8010ba40bccaa0aa935efe9523af912ebe2d"} Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.408226 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"673e4191-53ee-4b5d-8bbe-289693bab15d","Type":"ContainerStarted","Data":"8578d7a37ec0657437a018eb93215356f73ad8d22e76301f2324e78a38a544a3"} Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.445493 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.445478388 podStartE2EDuration="3.445478388s" podCreationTimestamp="2026-01-03 04:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:38:13.438843989 +0000 UTC m=+1320.555897164" watchObservedRunningTime="2026-01-03 04:38:13.445478388 +0000 UTC m=+1320.562531563" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.453070 4865 scope.go:117] "RemoveContainer" containerID="062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.462826 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.483018 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.489597 4865 scope.go:117] "RemoveContainer" containerID="dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe" Jan 03 04:38:13 crc kubenswrapper[4865]: E0103 04:38:13.490002 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe\": container with ID starting with dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe not found: ID does not exist" containerID="dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.490031 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe"} err="failed to get container status \"dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe\": rpc error: code = NotFound desc = could not find container \"dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe\": container with ID starting with dff09d6987741d440e5b5c25b3482412cdf5679f080c62e43565256e72bb0dbe not found: ID does not exist" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.490050 4865 scope.go:117] "RemoveContainer" containerID="062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31" Jan 03 04:38:13 crc kubenswrapper[4865]: E0103 04:38:13.490366 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31\": container with ID starting with 062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31 not found: ID does not exist" containerID="062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.490522 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31"} err="failed to get container status \"062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31\": rpc error: code = NotFound desc = could not find container 
\"062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31\": container with ID starting with 062a9d26609c7f1cb2a8a13d7b83a6b8b7173fd01aff98b7e867709b8a207f31 not found: ID does not exist" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.490599 4865 scope.go:117] "RemoveContainer" containerID="a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.494080 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.510262 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.512670 4865 scope.go:117] "RemoveContainer" containerID="a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" Jan 03 04:38:13 crc kubenswrapper[4865]: E0103 04:38:13.513877 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1\": container with ID starting with a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1 not found: ID does not exist" containerID="a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.513904 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1"} err="failed to get container status \"a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1\": rpc error: code = NotFound desc = could not find container \"a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1\": container with ID starting with a481ab0ca97b7f031c39f006ad5edee2fd03701ce53b41f1b1f9c45992aebca1 not found: ID does not exist" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.517536 4865 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: E0103 04:38:13.517971 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711b4fc2-6564-4370-8c37-3a7350b69e6b" containerName="nova-scheduler-scheduler" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.517993 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="711b4fc2-6564-4370-8c37-3a7350b69e6b" containerName="nova-scheduler-scheduler" Jan 03 04:38:13 crc kubenswrapper[4865]: E0103 04:38:13.518012 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-log" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.518028 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-log" Jan 03 04:38:13 crc kubenswrapper[4865]: E0103 04:38:13.518057 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-metadata" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.518064 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-metadata" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.518244 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-log" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.518254 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" containerName="nova-metadata-metadata" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.518266 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="711b4fc2-6564-4370-8c37-3a7350b69e6b" containerName="nova-scheduler-scheduler" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.519263 
4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.521847 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.522063 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.524716 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.526233 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.528826 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.534960 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.542826 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.586423 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.586675 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-logs\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " 
pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.586789 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbg26\" (UniqueName: \"kubernetes.io/projected/874d8744-eb6f-46f5-a6b7-35348b4f9359-kube-api-access-rbg26\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.586865 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874d8744-eb6f-46f5-a6b7-35348b4f9359-config-data\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.586936 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.587012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7w9\" (UniqueName: \"kubernetes.io/projected/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-kube-api-access-rf7w9\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.587093 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874d8744-eb6f-46f5-a6b7-35348b4f9359-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc 
kubenswrapper[4865]: I0103 04:38:13.587211 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-config-data\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689013 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-logs\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbg26\" (UniqueName: \"kubernetes.io/projected/874d8744-eb6f-46f5-a6b7-35348b4f9359-kube-api-access-rbg26\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689079 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874d8744-eb6f-46f5-a6b7-35348b4f9359-config-data\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689103 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689129 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf7w9\" (UniqueName: 
\"kubernetes.io/projected/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-kube-api-access-rf7w9\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689165 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874d8744-eb6f-46f5-a6b7-35348b4f9359-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689231 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-config-data\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.689274 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.693484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-logs\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.696476 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-config-data\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc 
kubenswrapper[4865]: I0103 04:38:13.697688 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874d8744-eb6f-46f5-a6b7-35348b4f9359-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.698502 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.700087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.701925 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874d8744-eb6f-46f5-a6b7-35348b4f9359-config-data\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.712647 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf7w9\" (UniqueName: \"kubernetes.io/projected/db9bcaee-003e-4fb7-b1b6-477c6583c4cc-kube-api-access-rf7w9\") pod \"nova-metadata-0\" (UID: \"db9bcaee-003e-4fb7-b1b6-477c6583c4cc\") " pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.723285 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbg26\" (UniqueName: 
\"kubernetes.io/projected/874d8744-eb6f-46f5-a6b7-35348b4f9359-kube-api-access-rbg26\") pod \"nova-scheduler-0\" (UID: \"874d8744-eb6f-46f5-a6b7-35348b4f9359\") " pod="openstack/nova-scheduler-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.845076 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 03 04:38:13 crc kubenswrapper[4865]: I0103 04:38:13.872617 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 03 04:38:14 crc kubenswrapper[4865]: I0103 04:38:14.293427 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 03 04:38:14 crc kubenswrapper[4865]: W0103 04:38:14.298120 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb9bcaee_003e_4fb7_b1b6_477c6583c4cc.slice/crio-dec2f031247038ca66e0d955764e1402f19c03fe38c09c6d702f60115413f6bd WatchSource:0}: Error finding container dec2f031247038ca66e0d955764e1402f19c03fe38c09c6d702f60115413f6bd: Status 404 returned error can't find the container with id dec2f031247038ca66e0d955764e1402f19c03fe38c09c6d702f60115413f6bd Jan 03 04:38:14 crc kubenswrapper[4865]: I0103 04:38:14.407139 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 03 04:38:14 crc kubenswrapper[4865]: W0103 04:38:14.408113 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod874d8744_eb6f_46f5_a6b7_35348b4f9359.slice/crio-09ea7dbfc07ad0602c899eea0897b2556009e4eb46f92f92a31d930d06baeed2 WatchSource:0}: Error finding container 09ea7dbfc07ad0602c899eea0897b2556009e4eb46f92f92a31d930d06baeed2: Status 404 returned error can't find the container with id 09ea7dbfc07ad0602c899eea0897b2556009e4eb46f92f92a31d930d06baeed2 Jan 03 04:38:14 crc kubenswrapper[4865]: I0103 04:38:14.420226 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"874d8744-eb6f-46f5-a6b7-35348b4f9359","Type":"ContainerStarted","Data":"09ea7dbfc07ad0602c899eea0897b2556009e4eb46f92f92a31d930d06baeed2"} Jan 03 04:38:14 crc kubenswrapper[4865]: I0103 04:38:14.423220 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db9bcaee-003e-4fb7-b1b6-477c6583c4cc","Type":"ContainerStarted","Data":"dec2f031247038ca66e0d955764e1402f19c03fe38c09c6d702f60115413f6bd"} Jan 03 04:38:15 crc kubenswrapper[4865]: I0103 04:38:15.169227 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="711b4fc2-6564-4370-8c37-3a7350b69e6b" path="/var/lib/kubelet/pods/711b4fc2-6564-4370-8c37-3a7350b69e6b/volumes" Jan 03 04:38:15 crc kubenswrapper[4865]: I0103 04:38:15.170862 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8d9e87-a1c6-45bf-a9dc-24201830e28a" path="/var/lib/kubelet/pods/ab8d9e87-a1c6-45bf-a9dc-24201830e28a/volumes" Jan 03 04:38:15 crc kubenswrapper[4865]: I0103 04:38:15.438740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"874d8744-eb6f-46f5-a6b7-35348b4f9359","Type":"ContainerStarted","Data":"2d659b82a12c639941b79d0fba307fd5acaaa5300bdbf3abfbfdcc3eac95406d"} Jan 03 04:38:15 crc kubenswrapper[4865]: I0103 04:38:15.443254 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db9bcaee-003e-4fb7-b1b6-477c6583c4cc","Type":"ContainerStarted","Data":"f6451dbdac74932532dd747ca3584a2dece5397edbec3696801add968335b623"} Jan 03 04:38:15 crc kubenswrapper[4865]: I0103 04:38:15.443279 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db9bcaee-003e-4fb7-b1b6-477c6583c4cc","Type":"ContainerStarted","Data":"15b524636a01fc608b6e95c6a4742a605f47efc91451c316832989761a3aff54"} Jan 03 04:38:15 crc kubenswrapper[4865]: I0103 
04:38:15.464922 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4649047570000002 podStartE2EDuration="2.464904757s" podCreationTimestamp="2026-01-03 04:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:38:15.457947839 +0000 UTC m=+1322.575001064" watchObservedRunningTime="2026-01-03 04:38:15.464904757 +0000 UTC m=+1322.581957942" Jan 03 04:38:15 crc kubenswrapper[4865]: I0103 04:38:15.499624 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.499598343 podStartE2EDuration="2.499598343s" podCreationTimestamp="2026-01-03 04:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:38:15.488128324 +0000 UTC m=+1322.605181559" watchObservedRunningTime="2026-01-03 04:38:15.499598343 +0000 UTC m=+1322.616651528" Jan 03 04:38:18 crc kubenswrapper[4865]: I0103 04:38:18.848784 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 04:38:18 crc kubenswrapper[4865]: I0103 04:38:18.849634 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 03 04:38:18 crc kubenswrapper[4865]: I0103 04:38:18.873719 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 03 04:38:21 crc kubenswrapper[4865]: I0103 04:38:21.034606 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 04:38:21 crc kubenswrapper[4865]: I0103 04:38:21.035788 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 03 04:38:22 crc kubenswrapper[4865]: I0103 04:38:22.043718 4865 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="673e4191-53ee-4b5d-8bbe-289693bab15d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 04:38:22 crc kubenswrapper[4865]: I0103 04:38:22.043735 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="673e4191-53ee-4b5d-8bbe-289693bab15d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 04:38:23 crc kubenswrapper[4865]: I0103 04:38:23.849296 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 04:38:23 crc kubenswrapper[4865]: I0103 04:38:23.852588 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 03 04:38:23 crc kubenswrapper[4865]: I0103 04:38:23.873044 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 03 04:38:23 crc kubenswrapper[4865]: I0103 04:38:23.925154 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 03 04:38:24 crc kubenswrapper[4865]: I0103 04:38:24.611994 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 03 04:38:24 crc kubenswrapper[4865]: I0103 04:38:24.863712 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db9bcaee-003e-4fb7-b1b6-477c6583c4cc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 03 04:38:24 crc kubenswrapper[4865]: I0103 04:38:24.863703 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="db9bcaee-003e-4fb7-b1b6-477c6583c4cc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 03 04:38:29 crc kubenswrapper[4865]: I0103 04:38:29.601024 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 03 04:38:31 crc kubenswrapper[4865]: I0103 04:38:31.042932 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 04:38:31 crc kubenswrapper[4865]: I0103 04:38:31.043777 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 04:38:31 crc kubenswrapper[4865]: I0103 04:38:31.047374 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 03 04:38:31 crc kubenswrapper[4865]: I0103 04:38:31.057755 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 04:38:31 crc kubenswrapper[4865]: I0103 04:38:31.653965 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 03 04:38:31 crc kubenswrapper[4865]: I0103 04:38:31.665707 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 03 04:38:33 crc kubenswrapper[4865]: I0103 04:38:33.856806 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 03 04:38:33 crc kubenswrapper[4865]: I0103 04:38:33.857274 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 03 04:38:33 crc kubenswrapper[4865]: I0103 04:38:33.862916 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 04:38:33 crc kubenswrapper[4865]: I0103 04:38:33.865055 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 03 04:38:43 crc kubenswrapper[4865]: I0103 04:38:43.617568 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:38:44 crc kubenswrapper[4865]: I0103 04:38:44.702510 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:38:47 crc kubenswrapper[4865]: I0103 04:38:47.648755 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerName="rabbitmq" containerID="cri-o://16f9af045039134433d1c81cb67c98ecad80802609a88144980e30cc8a9f68cb" gracePeriod=604796 Jan 03 04:38:48 crc kubenswrapper[4865]: I0103 04:38:48.775631 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerName="rabbitmq" containerID="cri-o://a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32" gracePeriod=604796 Jan 03 04:38:54 crc kubenswrapper[4865]: I0103 04:38:54.895942 4865 generic.go:334] "Generic (PLEG): container finished" podID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerID="16f9af045039134433d1c81cb67c98ecad80802609a88144980e30cc8a9f68cb" exitCode=0 Jan 03 04:38:54 crc kubenswrapper[4865]: I0103 04:38:54.896039 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3d1e308-7d01-4224-9cc0-a5ed59256c80","Type":"ContainerDied","Data":"16f9af045039134433d1c81cb67c98ecad80802609a88144980e30cc8a9f68cb"} Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.224673 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.368417 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.398922 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-erlang-cookie\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.398975 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4bv9\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-kube-api-access-f4bv9\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399021 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-tls\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399065 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-config-data\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399089 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399132 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d3d1e308-7d01-4224-9cc0-a5ed59256c80-pod-info\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399219 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-plugins\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399242 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-plugins-conf\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399319 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-confd\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399341 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-server-conf\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.399360 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3d1e308-7d01-4224-9cc0-a5ed59256c80-erlang-cookie-secret\") pod \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\" (UID: \"d3d1e308-7d01-4224-9cc0-a5ed59256c80\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 
04:38:55.400623 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.400715 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.405204 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.417769 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.417781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d1e308-7d01-4224-9cc0-a5ed59256c80-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.417772 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.418060 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-kube-api-access-f4bv9" (OuterVolumeSpecName: "kube-api-access-f4bv9") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "kube-api-access-f4bv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.459553 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d3d1e308-7d01-4224-9cc0-a5ed59256c80-pod-info" (OuterVolumeSpecName: "pod-info") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.459977 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-config-data" (OuterVolumeSpecName: "config-data") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.503847 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-server-conf\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.503887 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-plugins-conf\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.503915 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9622l\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-kube-api-access-9622l\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.504763 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-confd\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.504815 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-tls\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.504847 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-config-data\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.504947 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-pod-info\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.504973 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-erlang-cookie\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.504995 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-plugins\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505015 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: 
\"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505051 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-erlang-cookie-secret\") pod \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\" (UID: \"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41\") " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505724 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505742 4865 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505753 4865 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3d1e308-7d01-4224-9cc0-a5ed59256c80-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505762 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505772 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4bv9\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-kube-api-access-f4bv9\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505782 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505813 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505831 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.505840 4865 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d3d1e308-7d01-4224-9cc0-a5ed59256c80-pod-info\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.507323 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.507910 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.508216 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.511132 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-server-conf" (OuterVolumeSpecName: "server-conf") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.524550 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-kube-api-access-9622l" (OuterVolumeSpecName: "kube-api-access-9622l") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "kube-api-access-9622l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.527316 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-pod-info" (OuterVolumeSpecName: "pod-info") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.529355 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.533855 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.539010 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.558621 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.595931 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-config-data" (OuterVolumeSpecName: "config-data") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607734 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607774 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607786 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607797 4865 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-pod-info\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607837 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607853 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607887 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607899 4865 reconciler_common.go:293] "Volume 
detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607910 4865 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607920 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9622l\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-kube-api-access-9622l\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.607930 4865 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3d1e308-7d01-4224-9cc0-a5ed59256c80-server-conf\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.617345 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-server-conf" (OuterVolumeSpecName: "server-conf") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.632130 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.636292 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d3d1e308-7d01-4224-9cc0-a5ed59256c80" (UID: "d3d1e308-7d01-4224-9cc0-a5ed59256c80"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.677202 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" (UID: "d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.710150 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.710192 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.710206 4865 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41-server-conf\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.710217 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3d1e308-7d01-4224-9cc0-a5ed59256c80-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.906007 4865 generic.go:334] "Generic (PLEG): container finished" podID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerID="a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32" exitCode=0 Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.906044 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41","Type":"ContainerDied","Data":"a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32"} Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.906086 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41","Type":"ContainerDied","Data":"aa435c949996a4f4d334a4d30973757d8b21906cf17326c6816c56a8bcc3f632"} Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.906106 4865 scope.go:117] "RemoveContainer" containerID="a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.906119 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.908768 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3d1e308-7d01-4224-9cc0-a5ed59256c80","Type":"ContainerDied","Data":"4c0b2b0175122e6030c390633bb3281dc42a8f9ec5f48757fe73bc85d93913df"} Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.908866 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.938324 4865 scope.go:117] "RemoveContainer" containerID="58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.944548 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.953563 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.964168 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.977479 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.980105 4865 scope.go:117] "RemoveContainer" containerID="a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32" Jan 03 04:38:55 crc kubenswrapper[4865]: E0103 04:38:55.980757 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32\": container with ID starting with a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32 not found: ID does not exist" containerID="a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.980818 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32"} err="failed to get container status \"a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32\": rpc error: code = NotFound desc = could not find container \"a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32\": container with ID starting 
with a5739134387e76cb39db6a4faea39fa7e3a0eee860762830cfd3ed6fb37a5e32 not found: ID does not exist" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.980854 4865 scope.go:117] "RemoveContainer" containerID="58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c" Jan 03 04:38:55 crc kubenswrapper[4865]: E0103 04:38:55.981317 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c\": container with ID starting with 58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c not found: ID does not exist" containerID="58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.981357 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c"} err="failed to get container status \"58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c\": rpc error: code = NotFound desc = could not find container \"58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c\": container with ID starting with 58ed4391c7ce5d2f4000af262e036311cb9c8d0f179d4d7d142e25bc899f1f1c not found: ID does not exist" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.981409 4865 scope.go:117] "RemoveContainer" containerID="16f9af045039134433d1c81cb67c98ecad80802609a88144980e30cc8a9f68cb" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.987254 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:38:55 crc kubenswrapper[4865]: E0103 04:38:55.987655 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerName="setup-container" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.987670 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerName="setup-container" Jan 03 04:38:55 crc kubenswrapper[4865]: E0103 04:38:55.987685 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerName="rabbitmq" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.987692 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerName="rabbitmq" Jan 03 04:38:55 crc kubenswrapper[4865]: E0103 04:38:55.987720 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerName="setup-container" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.987726 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerName="setup-container" Jan 03 04:38:55 crc kubenswrapper[4865]: E0103 04:38:55.987734 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerName="rabbitmq" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.987740 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerName="rabbitmq" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.987908 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" containerName="rabbitmq" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.987922 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" containerName="rabbitmq" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.988937 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.992749 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.993620 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.993653 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.993624 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.993792 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2z5mc" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.993887 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.993906 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.997067 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:38:55 crc kubenswrapper[4865]: I0103 04:38:55.999047 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.003837 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.003885 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.003925 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.003891 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.004000 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.003838 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.004102 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xqmxl" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.005300 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.020997 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.022666 4865 scope.go:117] "RemoveContainer" containerID="0f49289493175c6b66587e9f00a161681c04ea251dabfe0a77f61cbf5a9a8e38" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.116638 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.116921 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.116944 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.116969 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.116995 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117011 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117027 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x42n\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-kube-api-access-5x42n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117047 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117071 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117086 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117102 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117120 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117144 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7zz6\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-kube-api-access-m7zz6\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117176 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117238 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117254 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 
03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117321 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117339 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.117355 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc 
kubenswrapper[4865]: I0103 04:38:56.117413 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219050 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219096 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219128 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219173 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x42n\" 
(UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-kube-api-access-5x42n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219189 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219205 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219223 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219245 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219284 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219300 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219328 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219351 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7zz6\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-kube-api-access-m7zz6\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219708 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219760 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc 
kubenswrapper[4865]: I0103 04:38:56.219784 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219806 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219821 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219855 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219889 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219906 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.219942 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.220527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.220574 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.220632 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.220655 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.220662 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.220778 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.221082 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.221542 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc 
kubenswrapper[4865]: I0103 04:38:56.221814 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.224022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.224097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.225572 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.225670 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.226620 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.227167 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.227569 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.227660 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.228084 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.230824 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 
04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.238531 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7zz6\" (UniqueName: \"kubernetes.io/projected/3b26f2aa-ddac-4d96-b129-4738eee8fdb8-kube-api-access-m7zz6\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.238996 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x42n\" (UniqueName: \"kubernetes.io/projected/a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5-kube-api-access-5x42n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.259898 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3b26f2aa-ddac-4d96-b129-4738eee8fdb8\") " pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.269981 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.390415 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.421985 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.767886 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-zgpgh"] Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.770262 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.776023 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.794676 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-zgpgh"] Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.840609 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.840687 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-svc\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.840736 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-config\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.840764 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.840844 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.840875 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-kube-api-access-w6wl2\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.840984 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.943751 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.944704 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-config\") 
pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.944949 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.945084 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.945121 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-kube-api-access-w6wl2\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.945276 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.945327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-sb\") pod 
\"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.945354 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-svc\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.946069 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-config\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.946087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.946280 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.946414 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " 
pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.946702 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.947122 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-svc\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:56 crc kubenswrapper[4865]: I0103 04:38:56.964275 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-kube-api-access-w6wl2\") pod \"dnsmasq-dns-67b789f86c-zgpgh\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.052419 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 03 04:38:57 crc kubenswrapper[4865]: W0103 04:38:57.053926 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b26f2aa_ddac_4d96_b129_4738eee8fdb8.slice/crio-1c6e0bb0a7767c36a2d11040980d7b2fbec117deb2b14eb39a3a5643f307b107 WatchSource:0}: Error finding container 1c6e0bb0a7767c36a2d11040980d7b2fbec117deb2b14eb39a3a5643f307b107: Status 404 returned error can't find the container with id 1c6e0bb0a7767c36a2d11040980d7b2fbec117deb2b14eb39a3a5643f307b107 Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.089553 4865 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.176542 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41" path="/var/lib/kubelet/pods/d2a9772f-08d4-41e5-87cb-7a4c5d3c1b41/volumes" Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.177841 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d1e308-7d01-4224-9cc0-a5ed59256c80" path="/var/lib/kubelet/pods/d3d1e308-7d01-4224-9cc0-a5ed59256c80/volumes" Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.642283 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-zgpgh"] Jan 03 04:38:57 crc kubenswrapper[4865]: W0103 04:38:57.667060 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26d2f1e2_a6f9_4e79_935a_4bed6d672b37.slice/crio-54aeebf3f5a12951ec85d291956c74c53eee9f7a73c6efa9900ac5c167d6f4ac WatchSource:0}: Error finding container 54aeebf3f5a12951ec85d291956c74c53eee9f7a73c6efa9900ac5c167d6f4ac: Status 404 returned error can't find the container with id 54aeebf3f5a12951ec85d291956c74c53eee9f7a73c6efa9900ac5c167d6f4ac Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.968799 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" event={"ID":"26d2f1e2-a6f9-4e79-935a-4bed6d672b37","Type":"ContainerStarted","Data":"54aeebf3f5a12951ec85d291956c74c53eee9f7a73c6efa9900ac5c167d6f4ac"} Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.970720 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b26f2aa-ddac-4d96-b129-4738eee8fdb8","Type":"ContainerStarted","Data":"1c6e0bb0a7767c36a2d11040980d7b2fbec117deb2b14eb39a3a5643f307b107"} Jan 03 04:38:57 crc kubenswrapper[4865]: I0103 04:38:57.971996 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5","Type":"ContainerStarted","Data":"69d1587f2de7a553ebdf0fda99fd204df26d4d9407326eea1d15ccebc974a4d9"} Jan 03 04:38:58 crc kubenswrapper[4865]: I0103 04:38:58.984393 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5","Type":"ContainerStarted","Data":"c845a4fafcc3af3b2c0524d959f64cb2b7e06f986275c79dc7679eae89151de3"} Jan 03 04:38:58 crc kubenswrapper[4865]: I0103 04:38:58.986838 4865 generic.go:334] "Generic (PLEG): container finished" podID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerID="7c85b62bafc60d935198deed5cdab9e33a99d33d83d9c5c03dd9a870012caa61" exitCode=0 Jan 03 04:38:58 crc kubenswrapper[4865]: I0103 04:38:58.986886 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" event={"ID":"26d2f1e2-a6f9-4e79-935a-4bed6d672b37","Type":"ContainerDied","Data":"7c85b62bafc60d935198deed5cdab9e33a99d33d83d9c5c03dd9a870012caa61"} Jan 03 04:38:58 crc kubenswrapper[4865]: I0103 04:38:58.989457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b26f2aa-ddac-4d96-b129-4738eee8fdb8","Type":"ContainerStarted","Data":"9b9a6ec018408f626fb4a4f699cc9a4d526c84d07e508677a5df492bf9b08020"} Jan 03 04:39:00 crc kubenswrapper[4865]: I0103 04:39:00.001607 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" event={"ID":"26d2f1e2-a6f9-4e79-935a-4bed6d672b37","Type":"ContainerStarted","Data":"fcbc20d3e3d6d53a22b1e23c5b9ceeaf3915e7f72638e5279a49b88819518aa4"} Jan 03 04:39:00 crc kubenswrapper[4865]: I0103 04:39:00.026154 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" podStartSLOduration=4.026136207 podStartE2EDuration="4.026136207s" podCreationTimestamp="2026-01-03 
04:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:39:00.023050274 +0000 UTC m=+1367.140103459" watchObservedRunningTime="2026-01-03 04:39:00.026136207 +0000 UTC m=+1367.143189392" Jan 03 04:39:01 crc kubenswrapper[4865]: I0103 04:39:01.012288 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.091507 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.207059 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-xx7tw"] Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.207424 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerName="dnsmasq-dns" containerID="cri-o://5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a" gracePeriod=10 Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.474127 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-kxf4v"] Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.481708 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.488043 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-kxf4v"] Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.664737 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-config\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.665074 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.665100 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.665838 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbs2\" (UniqueName: \"kubernetes.io/projected/5a6c30d9-afcf-463b-a58f-dfc353d40686-kube-api-access-2xbs2\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.665886 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.666059 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.666366 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.724609 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.768829 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbs2\" (UniqueName: \"kubernetes.io/projected/5a6c30d9-afcf-463b-a58f-dfc353d40686-kube-api-access-2xbs2\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.768886 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.768932 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.769018 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.769054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-config\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " 
pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.769098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.769119 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.770560 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.770608 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.770763 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-config\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.771153 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.771598 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.772493 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a6c30d9-afcf-463b-a58f-dfc353d40686-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.805442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbs2\" (UniqueName: \"kubernetes.io/projected/5a6c30d9-afcf-463b-a58f-dfc353d40686-kube-api-access-2xbs2\") pod \"dnsmasq-dns-cb6ffcf87-kxf4v\" (UID: \"5a6c30d9-afcf-463b-a58f-dfc353d40686\") " pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.820794 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.870059 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-svc\") pod \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.870115 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-swift-storage-0\") pod \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.870249 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-nb\") pod \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.870305 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-sb\") pod \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.870345 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjbqj\" (UniqueName: \"kubernetes.io/projected/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-kube-api-access-hjbqj\") pod \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.870675 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-config\") pod \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\" (UID: \"f729f87f-62c3-4ca8-9f48-700d3dd15ac0\") " Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.895631 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-kube-api-access-hjbqj" (OuterVolumeSpecName: "kube-api-access-hjbqj") pod "f729f87f-62c3-4ca8-9f48-700d3dd15ac0" (UID: "f729f87f-62c3-4ca8-9f48-700d3dd15ac0"). InnerVolumeSpecName "kube-api-access-hjbqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.974165 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f729f87f-62c3-4ca8-9f48-700d3dd15ac0" (UID: "f729f87f-62c3-4ca8-9f48-700d3dd15ac0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.974740 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.974769 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjbqj\" (UniqueName: \"kubernetes.io/projected/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-kube-api-access-hjbqj\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.978312 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f729f87f-62c3-4ca8-9f48-700d3dd15ac0" (UID: "f729f87f-62c3-4ca8-9f48-700d3dd15ac0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.982713 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f729f87f-62c3-4ca8-9f48-700d3dd15ac0" (UID: "f729f87f-62c3-4ca8-9f48-700d3dd15ac0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:07 crc kubenswrapper[4865]: I0103 04:39:07.985700 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-config" (OuterVolumeSpecName: "config") pod "f729f87f-62c3-4ca8-9f48-700d3dd15ac0" (UID: "f729f87f-62c3-4ca8-9f48-700d3dd15ac0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.005489 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f729f87f-62c3-4ca8-9f48-700d3dd15ac0" (UID: "f729f87f-62c3-4ca8-9f48-700d3dd15ac0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.076832 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.076864 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.076876 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.076884 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f729f87f-62c3-4ca8-9f48-700d3dd15ac0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.085541 4865 generic.go:334] "Generic (PLEG): container finished" podID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerID="5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a" exitCode=0 Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.085582 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.085590 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" event={"ID":"f729f87f-62c3-4ca8-9f48-700d3dd15ac0","Type":"ContainerDied","Data":"5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a"} Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.085628 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" event={"ID":"f729f87f-62c3-4ca8-9f48-700d3dd15ac0","Type":"ContainerDied","Data":"eaaad1e4c898863d52b95830c1e3a0a9bf4f6ccb3f58d7a9bbbae4d566a18d51"} Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.085664 4865 scope.go:117] "RemoveContainer" containerID="5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.108213 4865 scope.go:117] "RemoveContainer" containerID="f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.126181 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-xx7tw"] Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.138226 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-xx7tw"] Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.157483 4865 scope.go:117] "RemoveContainer" containerID="5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a" Jan 03 04:39:08 crc kubenswrapper[4865]: E0103 04:39:08.157846 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a\": container with ID starting with 5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a not found: ID does not exist" 
containerID="5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.157897 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a"} err="failed to get container status \"5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a\": rpc error: code = NotFound desc = could not find container \"5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a\": container with ID starting with 5a4d5b26940f9371549ec05920980582a2840db30fed7d11a01c5d96a2747e3a not found: ID does not exist" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.157915 4865 scope.go:117] "RemoveContainer" containerID="f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e" Jan 03 04:39:08 crc kubenswrapper[4865]: E0103 04:39:08.158352 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e\": container with ID starting with f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e not found: ID does not exist" containerID="f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e" Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.158407 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e"} err="failed to get container status \"f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e\": rpc error: code = NotFound desc = could not find container \"f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e\": container with ID starting with f83bdd3a602a1e008ba90e6c89061bfdc05d033c459bf753ec865261d146df5e not found: ID does not exist" Jan 03 04:39:08 crc kubenswrapper[4865]: W0103 04:39:08.355522 4865 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a6c30d9_afcf_463b_a58f_dfc353d40686.slice/crio-5366f728f492ca348a2853c997113347e42f7423bc86ae3a531faaecc76eca44 WatchSource:0}: Error finding container 5366f728f492ca348a2853c997113347e42f7423bc86ae3a531faaecc76eca44: Status 404 returned error can't find the container with id 5366f728f492ca348a2853c997113347e42f7423bc86ae3a531faaecc76eca44 Jan 03 04:39:08 crc kubenswrapper[4865]: I0103 04:39:08.356479 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-kxf4v"] Jan 03 04:39:09 crc kubenswrapper[4865]: I0103 04:39:09.096743 4865 generic.go:334] "Generic (PLEG): container finished" podID="5a6c30d9-afcf-463b-a58f-dfc353d40686" containerID="420e9bb059eee9b1acbacfc9565edd3280d281ddeb11578525b9e80ee83fe285" exitCode=0 Jan 03 04:39:09 crc kubenswrapper[4865]: I0103 04:39:09.096842 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" event={"ID":"5a6c30d9-afcf-463b-a58f-dfc353d40686","Type":"ContainerDied","Data":"420e9bb059eee9b1acbacfc9565edd3280d281ddeb11578525b9e80ee83fe285"} Jan 03 04:39:09 crc kubenswrapper[4865]: I0103 04:39:09.097060 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" event={"ID":"5a6c30d9-afcf-463b-a58f-dfc353d40686","Type":"ContainerStarted","Data":"5366f728f492ca348a2853c997113347e42f7423bc86ae3a531faaecc76eca44"} Jan 03 04:39:09 crc kubenswrapper[4865]: I0103 04:39:09.174525 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" path="/var/lib/kubelet/pods/f729f87f-62c3-4ca8-9f48-700d3dd15ac0/volumes" Jan 03 04:39:10 crc kubenswrapper[4865]: I0103 04:39:10.109762 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" 
event={"ID":"5a6c30d9-afcf-463b-a58f-dfc353d40686","Type":"ContainerStarted","Data":"c11ab7257adfc4087d0e674c5f9c26cc22f1a561f1f36e503dfcb97ac3e21c5d"} Jan 03 04:39:10 crc kubenswrapper[4865]: I0103 04:39:10.110037 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:10 crc kubenswrapper[4865]: I0103 04:39:10.144184 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" podStartSLOduration=3.14416119 podStartE2EDuration="3.14416119s" podCreationTimestamp="2026-01-03 04:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:39:10.131512499 +0000 UTC m=+1377.248565684" watchObservedRunningTime="2026-01-03 04:39:10.14416119 +0000 UTC m=+1377.261214395" Jan 03 04:39:12 crc kubenswrapper[4865]: I0103 04:39:12.670264 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-xx7tw" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: i/o timeout" Jan 03 04:39:17 crc kubenswrapper[4865]: I0103 04:39:17.823534 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-kxf4v" Jan 03 04:39:17 crc kubenswrapper[4865]: I0103 04:39:17.887080 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-zgpgh"] Jan 03 04:39:17 crc kubenswrapper[4865]: I0103 04:39:17.887372 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" podUID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerName="dnsmasq-dns" containerID="cri-o://fcbc20d3e3d6d53a22b1e23c5b9ceeaf3915e7f72638e5279a49b88819518aa4" gracePeriod=10 Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.199016 4865 generic.go:334] 
"Generic (PLEG): container finished" podID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerID="fcbc20d3e3d6d53a22b1e23c5b9ceeaf3915e7f72638e5279a49b88819518aa4" exitCode=0 Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.199175 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" event={"ID":"26d2f1e2-a6f9-4e79-935a-4bed6d672b37","Type":"ContainerDied","Data":"fcbc20d3e3d6d53a22b1e23c5b9ceeaf3915e7f72638e5279a49b88819518aa4"} Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.380470 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.504902 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-sb\") pod \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.504998 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-config\") pod \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.505063 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-openstack-edpm-ipam\") pod \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.505096 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-swift-storage-0\") 
pod \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.505134 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-kube-api-access-w6wl2\") pod \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.505150 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-svc\") pod \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.505187 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-nb\") pod \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\" (UID: \"26d2f1e2-a6f9-4e79-935a-4bed6d672b37\") " Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.512844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-kube-api-access-w6wl2" (OuterVolumeSpecName: "kube-api-access-w6wl2") pod "26d2f1e2-a6f9-4e79-935a-4bed6d672b37" (UID: "26d2f1e2-a6f9-4e79-935a-4bed6d672b37"). InnerVolumeSpecName "kube-api-access-w6wl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.552279 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26d2f1e2-a6f9-4e79-935a-4bed6d672b37" (UID: "26d2f1e2-a6f9-4e79-935a-4bed6d672b37"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.553626 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-config" (OuterVolumeSpecName: "config") pod "26d2f1e2-a6f9-4e79-935a-4bed6d672b37" (UID: "26d2f1e2-a6f9-4e79-935a-4bed6d672b37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.555741 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26d2f1e2-a6f9-4e79-935a-4bed6d672b37" (UID: "26d2f1e2-a6f9-4e79-935a-4bed6d672b37"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.556976 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26d2f1e2-a6f9-4e79-935a-4bed6d672b37" (UID: "26d2f1e2-a6f9-4e79-935a-4bed6d672b37"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.561034 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "26d2f1e2-a6f9-4e79-935a-4bed6d672b37" (UID: "26d2f1e2-a6f9-4e79-935a-4bed6d672b37"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.565510 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "26d2f1e2-a6f9-4e79-935a-4bed6d672b37" (UID: "26d2f1e2-a6f9-4e79-935a-4bed6d672b37"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.607637 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.607681 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.607694 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-config\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.607707 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.607720 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.607731 4865 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-kube-api-access-w6wl2\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:18 crc kubenswrapper[4865]: I0103 04:39:18.607747 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d2f1e2-a6f9-4e79-935a-4bed6d672b37-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:19 crc kubenswrapper[4865]: I0103 04:39:19.209023 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" event={"ID":"26d2f1e2-a6f9-4e79-935a-4bed6d672b37","Type":"ContainerDied","Data":"54aeebf3f5a12951ec85d291956c74c53eee9f7a73c6efa9900ac5c167d6f4ac"} Jan 03 04:39:19 crc kubenswrapper[4865]: I0103 04:39:19.209290 4865 scope.go:117] "RemoveContainer" containerID="fcbc20d3e3d6d53a22b1e23c5b9ceeaf3915e7f72638e5279a49b88819518aa4" Jan 03 04:39:19 crc kubenswrapper[4865]: I0103 04:39:19.209172 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-zgpgh" Jan 03 04:39:19 crc kubenswrapper[4865]: I0103 04:39:19.236364 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-zgpgh"] Jan 03 04:39:19 crc kubenswrapper[4865]: I0103 04:39:19.243787 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-zgpgh"] Jan 03 04:39:19 crc kubenswrapper[4865]: I0103 04:39:19.243995 4865 scope.go:117] "RemoveContainer" containerID="7c85b62bafc60d935198deed5cdab9e33a99d33d83d9c5c03dd9a870012caa61" Jan 03 04:39:21 crc kubenswrapper[4865]: I0103 04:39:21.172022 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" path="/var/lib/kubelet/pods/26d2f1e2-a6f9-4e79-935a-4bed6d672b37/volumes" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.209331 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g"] Jan 03 04:39:30 crc kubenswrapper[4865]: E0103 04:39:30.210240 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerName="dnsmasq-dns" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.210256 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerName="dnsmasq-dns" Jan 03 04:39:30 crc kubenswrapper[4865]: E0103 04:39:30.210273 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerName="init" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.210280 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerName="init" Jan 03 04:39:30 crc kubenswrapper[4865]: E0103 04:39:30.210300 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerName="init" Jan 03 04:39:30 crc 
kubenswrapper[4865]: I0103 04:39:30.210308 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerName="init" Jan 03 04:39:30 crc kubenswrapper[4865]: E0103 04:39:30.210319 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerName="dnsmasq-dns" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.210326 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerName="dnsmasq-dns" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.210563 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d2f1e2-a6f9-4e79-935a-4bed6d672b37" containerName="dnsmasq-dns" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.210577 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f729f87f-62c3-4ca8-9f48-700d3dd15ac0" containerName="dnsmasq-dns" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.211247 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.213926 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.214029 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.214608 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.214808 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.223541 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g"] Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.346445 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.346815 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.346883 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstjf\" (UniqueName: \"kubernetes.io/projected/5f26a495-d92f-42c6-9395-d4cb6e0037f5-kube-api-access-zstjf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.346983 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.448170 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.448232 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.448297 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstjf\" (UniqueName: \"kubernetes.io/projected/5f26a495-d92f-42c6-9395-d4cb6e0037f5-kube-api-access-zstjf\") 
pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.448366 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.455338 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.456005 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.460759 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.468256 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zstjf\" (UniqueName: \"kubernetes.io/projected/5f26a495-d92f-42c6-9395-d4cb6e0037f5-kube-api-access-zstjf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:30 crc kubenswrapper[4865]: I0103 04:39:30.572162 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:32 crc kubenswrapper[4865]: I0103 04:39:32.552276 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g"] Jan 03 04:39:32 crc kubenswrapper[4865]: W0103 04:39:32.558490 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f26a495_d92f_42c6_9395_d4cb6e0037f5.slice/crio-93024a027a0f018c9318134c8f1cdbf4371b8d7ff19ee863a2a307b4ddf6a60b WatchSource:0}: Error finding container 93024a027a0f018c9318134c8f1cdbf4371b8d7ff19ee863a2a307b4ddf6a60b: Status 404 returned error can't find the container with id 93024a027a0f018c9318134c8f1cdbf4371b8d7ff19ee863a2a307b4ddf6a60b Jan 03 04:39:33 crc kubenswrapper[4865]: I0103 04:39:33.376169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" event={"ID":"5f26a495-d92f-42c6-9395-d4cb6e0037f5","Type":"ContainerStarted","Data":"93024a027a0f018c9318134c8f1cdbf4371b8d7ff19ee863a2a307b4ddf6a60b"} Jan 03 04:39:34 crc kubenswrapper[4865]: I0103 04:39:34.391366 4865 generic.go:334] "Generic (PLEG): container finished" podID="3b26f2aa-ddac-4d96-b129-4738eee8fdb8" containerID="9b9a6ec018408f626fb4a4f699cc9a4d526c84d07e508677a5df492bf9b08020" exitCode=0 Jan 03 04:39:34 crc kubenswrapper[4865]: I0103 04:39:34.391453 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"3b26f2aa-ddac-4d96-b129-4738eee8fdb8","Type":"ContainerDied","Data":"9b9a6ec018408f626fb4a4f699cc9a4d526c84d07e508677a5df492bf9b08020"} Jan 03 04:39:35 crc kubenswrapper[4865]: I0103 04:39:35.407740 4865 generic.go:334] "Generic (PLEG): container finished" podID="a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5" containerID="c845a4fafcc3af3b2c0524d959f64cb2b7e06f986275c79dc7679eae89151de3" exitCode=0 Jan 03 04:39:35 crc kubenswrapper[4865]: I0103 04:39:35.407887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5","Type":"ContainerDied","Data":"c845a4fafcc3af3b2c0524d959f64cb2b7e06f986275c79dc7679eae89151de3"} Jan 03 04:39:36 crc kubenswrapper[4865]: I0103 04:39:36.417946 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5","Type":"ContainerStarted","Data":"60bca007f6ecae30ea0c065db6580b8ab5718e7ccd4ddac4f2f93cb53ec86fe0"} Jan 03 04:39:36 crc kubenswrapper[4865]: I0103 04:39:36.418474 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:39:36 crc kubenswrapper[4865]: I0103 04:39:36.420114 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b26f2aa-ddac-4d96-b129-4738eee8fdb8","Type":"ContainerStarted","Data":"1a077cf6fad7c581288078d1997b109c121b8aaa83980eb87d01f8cf1d364ae7"} Jan 03 04:39:36 crc kubenswrapper[4865]: I0103 04:39:36.420493 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 03 04:39:36 crc kubenswrapper[4865]: I0103 04:39:36.437430 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.437414637 podStartE2EDuration="41.437414637s" podCreationTimestamp="2026-01-03 04:38:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:39:36.435350951 +0000 UTC m=+1403.552404146" watchObservedRunningTime="2026-01-03 04:39:36.437414637 +0000 UTC m=+1403.554467822" Jan 03 04:39:36 crc kubenswrapper[4865]: I0103 04:39:36.468945 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.468923916 podStartE2EDuration="41.468923916s" podCreationTimestamp="2026-01-03 04:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 04:39:36.453275734 +0000 UTC m=+1403.570328919" watchObservedRunningTime="2026-01-03 04:39:36.468923916 +0000 UTC m=+1403.585977101" Jan 03 04:39:40 crc kubenswrapper[4865]: I0103 04:39:40.739555 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:39:40 crc kubenswrapper[4865]: I0103 04:39:40.740053 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:39:45 crc kubenswrapper[4865]: I0103 04:39:45.504463 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" event={"ID":"5f26a495-d92f-42c6-9395-d4cb6e0037f5","Type":"ContainerStarted","Data":"2fef4166bb028e4248c70acc3c116e2f9afeceec425524829ee2e4bfccd7f0ea"} Jan 03 04:39:45 crc kubenswrapper[4865]: I0103 04:39:45.539106 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" podStartSLOduration=3.376976575 podStartE2EDuration="15.539076819s" podCreationTimestamp="2026-01-03 04:39:30 +0000 UTC" firstStartedPulling="2026-01-03 04:39:32.561411362 +0000 UTC m=+1399.678464547" lastFinishedPulling="2026-01-03 04:39:44.723511606 +0000 UTC m=+1411.840564791" observedRunningTime="2026-01-03 04:39:45.527335263 +0000 UTC m=+1412.644388438" watchObservedRunningTime="2026-01-03 04:39:45.539076819 +0000 UTC m=+1412.656130044" Jan 03 04:39:46 crc kubenswrapper[4865]: I0103 04:39:46.394626 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 03 04:39:46 crc kubenswrapper[4865]: I0103 04:39:46.424611 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 03 04:39:56 crc kubenswrapper[4865]: I0103 04:39:56.628249 4865 generic.go:334] "Generic (PLEG): container finished" podID="5f26a495-d92f-42c6-9395-d4cb6e0037f5" containerID="2fef4166bb028e4248c70acc3c116e2f9afeceec425524829ee2e4bfccd7f0ea" exitCode=0 Jan 03 04:39:56 crc kubenswrapper[4865]: I0103 04:39:56.628487 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" event={"ID":"5f26a495-d92f-42c6-9395-d4cb6e0037f5","Type":"ContainerDied","Data":"2fef4166bb028e4248c70acc3c116e2f9afeceec425524829ee2e4bfccd7f0ea"} Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.091219 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.194656 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-ssh-key\") pod \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.194736 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-inventory\") pod \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.194797 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-repo-setup-combined-ca-bundle\") pod \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.195086 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstjf\" (UniqueName: \"kubernetes.io/projected/5f26a495-d92f-42c6-9395-d4cb6e0037f5-kube-api-access-zstjf\") pod \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\" (UID: \"5f26a495-d92f-42c6-9395-d4cb6e0037f5\") " Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.201457 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f26a495-d92f-42c6-9395-d4cb6e0037f5-kube-api-access-zstjf" (OuterVolumeSpecName: "kube-api-access-zstjf") pod "5f26a495-d92f-42c6-9395-d4cb6e0037f5" (UID: "5f26a495-d92f-42c6-9395-d4cb6e0037f5"). InnerVolumeSpecName "kube-api-access-zstjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.205431 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5f26a495-d92f-42c6-9395-d4cb6e0037f5" (UID: "5f26a495-d92f-42c6-9395-d4cb6e0037f5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.229490 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-inventory" (OuterVolumeSpecName: "inventory") pod "5f26a495-d92f-42c6-9395-d4cb6e0037f5" (UID: "5f26a495-d92f-42c6-9395-d4cb6e0037f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.238296 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f26a495-d92f-42c6-9395-d4cb6e0037f5" (UID: "5f26a495-d92f-42c6-9395-d4cb6e0037f5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.297435 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zstjf\" (UniqueName: \"kubernetes.io/projected/5f26a495-d92f-42c6-9395-d4cb6e0037f5-kube-api-access-zstjf\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.297481 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.297496 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.297509 4865 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f26a495-d92f-42c6-9395-d4cb6e0037f5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.659758 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" event={"ID":"5f26a495-d92f-42c6-9395-d4cb6e0037f5","Type":"ContainerDied","Data":"93024a027a0f018c9318134c8f1cdbf4371b8d7ff19ee863a2a307b4ddf6a60b"} Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.659824 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93024a027a0f018c9318134c8f1cdbf4371b8d7ff19ee863a2a307b4ddf6a60b" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.659939 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.813584 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr"] Jan 03 04:39:58 crc kubenswrapper[4865]: E0103 04:39:58.814546 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f26a495-d92f-42c6-9395-d4cb6e0037f5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.814710 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f26a495-d92f-42c6-9395-d4cb6e0037f5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.815171 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f26a495-d92f-42c6-9395-d4cb6e0037f5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.816431 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.818882 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.819594 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.819806 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.820887 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.830703 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr"] Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.908806 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.909124 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:58 crc kubenswrapper[4865]: I0103 04:39:58.909350 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlfbq\" (UniqueName: \"kubernetes.io/projected/76548eb3-2e5a-4325-85c3-3dac91f58d9b-kube-api-access-wlfbq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.011525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.011619 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlfbq\" (UniqueName: \"kubernetes.io/projected/76548eb3-2e5a-4325-85c3-3dac91f58d9b-kube-api-access-wlfbq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.011763 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.018083 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.020101 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.034324 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlfbq\" (UniqueName: \"kubernetes.io/projected/76548eb3-2e5a-4325-85c3-3dac91f58d9b-kube-api-access-wlfbq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vdnjr\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.138298 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:39:59 crc kubenswrapper[4865]: I0103 04:39:59.710325 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr"] Jan 03 04:40:00 crc kubenswrapper[4865]: I0103 04:40:00.683589 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" event={"ID":"76548eb3-2e5a-4325-85c3-3dac91f58d9b","Type":"ContainerStarted","Data":"c3dd0c4f67dde739b11f884795eb65729decb70e404489a32704002a1e3dd006"} Jan 03 04:40:00 crc kubenswrapper[4865]: I0103 04:40:00.684060 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" event={"ID":"76548eb3-2e5a-4325-85c3-3dac91f58d9b","Type":"ContainerStarted","Data":"5ebbf59d889579d6e7ec2403922f327bd9e6ffb165e6f30b26dd05682672bbf2"} Jan 03 04:40:00 crc kubenswrapper[4865]: I0103 04:40:00.714686 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" podStartSLOduration=2.1459742410000002 podStartE2EDuration="2.714659538s" podCreationTimestamp="2026-01-03 04:39:58 +0000 UTC" firstStartedPulling="2026-01-03 04:39:59.714540456 +0000 UTC m=+1426.831593641" lastFinishedPulling="2026-01-03 04:40:00.283225743 +0000 UTC m=+1427.400278938" observedRunningTime="2026-01-03 04:40:00.698887572 +0000 UTC m=+1427.815940807" watchObservedRunningTime="2026-01-03 04:40:00.714659538 +0000 UTC m=+1427.831712753" Jan 03 04:40:03 crc kubenswrapper[4865]: I0103 04:40:03.726204 4865 generic.go:334] "Generic (PLEG): container finished" podID="76548eb3-2e5a-4325-85c3-3dac91f58d9b" containerID="c3dd0c4f67dde739b11f884795eb65729decb70e404489a32704002a1e3dd006" exitCode=0 Jan 03 04:40:03 crc kubenswrapper[4865]: I0103 04:40:03.726326 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" event={"ID":"76548eb3-2e5a-4325-85c3-3dac91f58d9b","Type":"ContainerDied","Data":"c3dd0c4f67dde739b11f884795eb65729decb70e404489a32704002a1e3dd006"} Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.259990 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.357647 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlfbq\" (UniqueName: \"kubernetes.io/projected/76548eb3-2e5a-4325-85c3-3dac91f58d9b-kube-api-access-wlfbq\") pod \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.357806 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-inventory\") pod \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.357853 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-ssh-key\") pod \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\" (UID: \"76548eb3-2e5a-4325-85c3-3dac91f58d9b\") " Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.366753 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76548eb3-2e5a-4325-85c3-3dac91f58d9b-kube-api-access-wlfbq" (OuterVolumeSpecName: "kube-api-access-wlfbq") pod "76548eb3-2e5a-4325-85c3-3dac91f58d9b" (UID: "76548eb3-2e5a-4325-85c3-3dac91f58d9b"). InnerVolumeSpecName "kube-api-access-wlfbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.395183 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-inventory" (OuterVolumeSpecName: "inventory") pod "76548eb3-2e5a-4325-85c3-3dac91f58d9b" (UID: "76548eb3-2e5a-4325-85c3-3dac91f58d9b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.410187 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "76548eb3-2e5a-4325-85c3-3dac91f58d9b" (UID: "76548eb3-2e5a-4325-85c3-3dac91f58d9b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.460282 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlfbq\" (UniqueName: \"kubernetes.io/projected/76548eb3-2e5a-4325-85c3-3dac91f58d9b-kube-api-access-wlfbq\") on node \"crc\" DevicePath \"\"" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.460323 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.460339 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76548eb3-2e5a-4325-85c3-3dac91f58d9b-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.747595 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" 
event={"ID":"76548eb3-2e5a-4325-85c3-3dac91f58d9b","Type":"ContainerDied","Data":"5ebbf59d889579d6e7ec2403922f327bd9e6ffb165e6f30b26dd05682672bbf2"} Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.747636 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ebbf59d889579d6e7ec2403922f327bd9e6ffb165e6f30b26dd05682672bbf2" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.747651 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vdnjr" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.830435 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v"] Jan 03 04:40:05 crc kubenswrapper[4865]: E0103 04:40:05.830824 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76548eb3-2e5a-4325-85c3-3dac91f58d9b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.830842 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="76548eb3-2e5a-4325-85c3-3dac91f58d9b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.832846 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="76548eb3-2e5a-4325-85c3-3dac91f58d9b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.833672 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.835351 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.835881 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.838639 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.838855 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.840657 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v"] Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.969483 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.969669 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/32245d9a-04a2-4ee3-99ae-6c876313c5a1-kube-api-access-669dw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.969733 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:05 crc kubenswrapper[4865]: I0103 04:40:05.969960 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.072268 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.072336 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/32245d9a-04a2-4ee3-99ae-6c876313c5a1-kube-api-access-669dw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.072399 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.072484 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.076770 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.078829 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.080264 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc 
kubenswrapper[4865]: I0103 04:40:06.100145 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/32245d9a-04a2-4ee3-99ae-6c876313c5a1-kube-api-access-669dw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:06 crc kubenswrapper[4865]: I0103 04:40:06.149564 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:40:08 crc kubenswrapper[4865]: I0103 04:40:08.176734 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v"] Jan 03 04:40:08 crc kubenswrapper[4865]: I0103 04:40:08.783103 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" event={"ID":"32245d9a-04a2-4ee3-99ae-6c876313c5a1","Type":"ContainerStarted","Data":"3c0ac55e0ee23bcfc1a91dbb4a026330bf238a4c99556dc3e3c65361d87e2d26"} Jan 03 04:40:09 crc kubenswrapper[4865]: I0103 04:40:09.798523 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" event={"ID":"32245d9a-04a2-4ee3-99ae-6c876313c5a1","Type":"ContainerStarted","Data":"1e20c03a1904f9646c7996476f6dd6def192401602507f650d464add2364e5df"} Jan 03 04:40:09 crc kubenswrapper[4865]: I0103 04:40:09.834448 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" podStartSLOduration=4.417827953 podStartE2EDuration="4.834409268s" podCreationTimestamp="2026-01-03 04:40:05 +0000 UTC" firstStartedPulling="2026-01-03 04:40:08.173842876 +0000 UTC m=+1435.290896101" lastFinishedPulling="2026-01-03 04:40:08.590424231 +0000 UTC m=+1435.707477416" 
observedRunningTime="2026-01-03 04:40:09.822811875 +0000 UTC m=+1436.939865080" watchObservedRunningTime="2026-01-03 04:40:09.834409268 +0000 UTC m=+1436.951462493" Jan 03 04:40:10 crc kubenswrapper[4865]: I0103 04:40:10.739601 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:40:10 crc kubenswrapper[4865]: I0103 04:40:10.740032 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:40:34 crc kubenswrapper[4865]: I0103 04:40:34.378407 4865 scope.go:117] "RemoveContainer" containerID="7a8fc72b6bacae784057f7d02352f91f39a4da8e3abcdf7177c855d118cf4311" Jan 03 04:40:34 crc kubenswrapper[4865]: I0103 04:40:34.408956 4865 scope.go:117] "RemoveContainer" containerID="09cdd10ee90e48a30782504c4df7c1f1f428bda24ab1e22bb6836c4ab23b8507" Jan 03 04:40:40 crc kubenswrapper[4865]: I0103 04:40:40.739725 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:40:40 crc kubenswrapper[4865]: I0103 04:40:40.740597 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 03 04:40:40 crc kubenswrapper[4865]: I0103 04:40:40.740677 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:40:40 crc kubenswrapper[4865]: I0103 04:40:40.741910 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f70e7e3b6f466cd92b76640e9a405cdf202ff8ab85d90bf5c8de1a794992f21a"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:40:40 crc kubenswrapper[4865]: I0103 04:40:40.741994 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://f70e7e3b6f466cd92b76640e9a405cdf202ff8ab85d90bf5c8de1a794992f21a" gracePeriod=600 Jan 03 04:40:41 crc kubenswrapper[4865]: I0103 04:40:41.138839 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="f70e7e3b6f466cd92b76640e9a405cdf202ff8ab85d90bf5c8de1a794992f21a" exitCode=0 Jan 03 04:40:41 crc kubenswrapper[4865]: I0103 04:40:41.139040 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"f70e7e3b6f466cd92b76640e9a405cdf202ff8ab85d90bf5c8de1a794992f21a"} Jan 03 04:40:41 crc kubenswrapper[4865]: I0103 04:40:41.139118 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e"} Jan 03 04:40:41 
crc kubenswrapper[4865]: I0103 04:40:41.139138 4865 scope.go:117] "RemoveContainer" containerID="a82e68c06b39e809cdca2872b7c7b72d7a687416c8815b2c0f9636f63f6ab156" Jan 03 04:41:34 crc kubenswrapper[4865]: I0103 04:41:34.572300 4865 scope.go:117] "RemoveContainer" containerID="ae3e9565dd7f755e5cc02a7f2e55ca30f9fe8c91d4e7ec5bd169e56b3a8005ca" Jan 03 04:41:43 crc kubenswrapper[4865]: I0103 04:41:43.923924 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-djgwr"] Jan 03 04:41:43 crc kubenswrapper[4865]: I0103 04:41:43.926722 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:43 crc kubenswrapper[4865]: I0103 04:41:43.952326 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-djgwr"] Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.055946 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-utilities\") pod \"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.056047 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-catalog-content\") pod \"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.056102 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqp2\" (UniqueName: \"kubernetes.io/projected/eea44cf7-7ca2-46b8-8bce-b384de5f9909-kube-api-access-6bqp2\") pod 
\"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.158928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-utilities\") pod \"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.159034 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-catalog-content\") pod \"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.159088 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqp2\" (UniqueName: \"kubernetes.io/projected/eea44cf7-7ca2-46b8-8bce-b384de5f9909-kube-api-access-6bqp2\") pod \"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.159772 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-catalog-content\") pod \"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.159800 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-utilities\") pod \"redhat-marketplace-djgwr\" (UID: 
\"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.183511 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqp2\" (UniqueName: \"kubernetes.io/projected/eea44cf7-7ca2-46b8-8bce-b384de5f9909-kube-api-access-6bqp2\") pod \"redhat-marketplace-djgwr\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.264955 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:44 crc kubenswrapper[4865]: W0103 04:41:44.737802 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeea44cf7_7ca2_46b8_8bce_b384de5f9909.slice/crio-cb306184c56760fa8ab30120b7fe74128e926b650c7095e41393bd2f553c4d6b WatchSource:0}: Error finding container cb306184c56760fa8ab30120b7fe74128e926b650c7095e41393bd2f553c4d6b: Status 404 returned error can't find the container with id cb306184c56760fa8ab30120b7fe74128e926b650c7095e41393bd2f553c4d6b Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.741366 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-djgwr"] Jan 03 04:41:44 crc kubenswrapper[4865]: I0103 04:41:44.882788 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djgwr" event={"ID":"eea44cf7-7ca2-46b8-8bce-b384de5f9909","Type":"ContainerStarted","Data":"cb306184c56760fa8ab30120b7fe74128e926b650c7095e41393bd2f553c4d6b"} Jan 03 04:41:45 crc kubenswrapper[4865]: I0103 04:41:45.899288 4865 generic.go:334] "Generic (PLEG): container finished" podID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerID="c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690" exitCode=0 Jan 03 04:41:45 crc 
kubenswrapper[4865]: I0103 04:41:45.899364 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djgwr" event={"ID":"eea44cf7-7ca2-46b8-8bce-b384de5f9909","Type":"ContainerDied","Data":"c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690"} Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.110578 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ztsm5"] Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.113599 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.133927 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztsm5"] Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.254686 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-utilities\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.255084 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-catalog-content\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.255181 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srlbk\" (UniqueName: \"kubernetes.io/projected/f20612cc-6f17-49d6-a147-7e80a59bb607-kube-api-access-srlbk\") pod \"certified-operators-ztsm5\" (UID: 
\"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.356837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srlbk\" (UniqueName: \"kubernetes.io/projected/f20612cc-6f17-49d6-a147-7e80a59bb607-kube-api-access-srlbk\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.356960 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-utilities\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.357008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-catalog-content\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.357580 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-catalog-content\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.358138 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-utilities\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") 
" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.391784 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srlbk\" (UniqueName: \"kubernetes.io/projected/f20612cc-6f17-49d6-a147-7e80a59bb607-kube-api-access-srlbk\") pod \"certified-operators-ztsm5\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.479106 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.931585 4865 generic.go:334] "Generic (PLEG): container finished" podID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerID="9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd" exitCode=0 Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.931638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djgwr" event={"ID":"eea44cf7-7ca2-46b8-8bce-b384de5f9909","Type":"ContainerDied","Data":"9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd"} Jan 03 04:41:48 crc kubenswrapper[4865]: I0103 04:41:48.981545 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztsm5"] Jan 03 04:41:49 crc kubenswrapper[4865]: I0103 04:41:49.942853 4865 generic.go:334] "Generic (PLEG): container finished" podID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerID="126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739" exitCode=0 Jan 03 04:41:49 crc kubenswrapper[4865]: I0103 04:41:49.942918 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztsm5" event={"ID":"f20612cc-6f17-49d6-a147-7e80a59bb607","Type":"ContainerDied","Data":"126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739"} Jan 03 04:41:49 crc 
kubenswrapper[4865]: I0103 04:41:49.943337 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztsm5" event={"ID":"f20612cc-6f17-49d6-a147-7e80a59bb607","Type":"ContainerStarted","Data":"32ef56e525b59a9b4cbcabd2682b2c9aba0f40ef52be49e834d3247a4b4cea8c"} Jan 03 04:41:49 crc kubenswrapper[4865]: I0103 04:41:49.948504 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djgwr" event={"ID":"eea44cf7-7ca2-46b8-8bce-b384de5f9909","Type":"ContainerStarted","Data":"939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e"} Jan 03 04:41:49 crc kubenswrapper[4865]: I0103 04:41:49.997997 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-djgwr" podStartSLOduration=3.432902587 podStartE2EDuration="6.997969689s" podCreationTimestamp="2026-01-03 04:41:43 +0000 UTC" firstStartedPulling="2026-01-03 04:41:45.901849538 +0000 UTC m=+1533.018902763" lastFinishedPulling="2026-01-03 04:41:49.46691666 +0000 UTC m=+1536.583969865" observedRunningTime="2026-01-03 04:41:49.983242242 +0000 UTC m=+1537.100295477" watchObservedRunningTime="2026-01-03 04:41:49.997969689 +0000 UTC m=+1537.115022904" Jan 03 04:41:50 crc kubenswrapper[4865]: I0103 04:41:50.959339 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztsm5" event={"ID":"f20612cc-6f17-49d6-a147-7e80a59bb607","Type":"ContainerStarted","Data":"8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df"} Jan 03 04:41:52 crc kubenswrapper[4865]: I0103 04:41:52.983468 4865 generic.go:334] "Generic (PLEG): container finished" podID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerID="8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df" exitCode=0 Jan 03 04:41:52 crc kubenswrapper[4865]: I0103 04:41:52.983585 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ztsm5" event={"ID":"f20612cc-6f17-49d6-a147-7e80a59bb607","Type":"ContainerDied","Data":"8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df"} Jan 03 04:41:54 crc kubenswrapper[4865]: I0103 04:41:53.999569 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztsm5" event={"ID":"f20612cc-6f17-49d6-a147-7e80a59bb607","Type":"ContainerStarted","Data":"5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332"} Jan 03 04:41:54 crc kubenswrapper[4865]: I0103 04:41:54.040478 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ztsm5" podStartSLOduration=2.498929414 podStartE2EDuration="6.040454912s" podCreationTimestamp="2026-01-03 04:41:48 +0000 UTC" firstStartedPulling="2026-01-03 04:41:49.944887966 +0000 UTC m=+1537.061941171" lastFinishedPulling="2026-01-03 04:41:53.486413484 +0000 UTC m=+1540.603466669" observedRunningTime="2026-01-03 04:41:54.021722707 +0000 UTC m=+1541.138775922" watchObservedRunningTime="2026-01-03 04:41:54.040454912 +0000 UTC m=+1541.157508097" Jan 03 04:41:54 crc kubenswrapper[4865]: I0103 04:41:54.265467 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:54 crc kubenswrapper[4865]: I0103 04:41:54.265531 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:54 crc kubenswrapper[4865]: I0103 04:41:54.313078 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:55 crc kubenswrapper[4865]: I0103 04:41:55.067982 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:56 crc kubenswrapper[4865]: I0103 04:41:56.501236 4865 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-djgwr"] Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.028795 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-djgwr" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="registry-server" containerID="cri-o://939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e" gracePeriod=2 Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.601374 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.739761 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bqp2\" (UniqueName: \"kubernetes.io/projected/eea44cf7-7ca2-46b8-8bce-b384de5f9909-kube-api-access-6bqp2\") pod \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.739903 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-catalog-content\") pod \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.740226 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-utilities\") pod \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\" (UID: \"eea44cf7-7ca2-46b8-8bce-b384de5f9909\") " Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.741207 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-utilities" (OuterVolumeSpecName: "utilities") pod 
"eea44cf7-7ca2-46b8-8bce-b384de5f9909" (UID: "eea44cf7-7ca2-46b8-8bce-b384de5f9909"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.747853 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea44cf7-7ca2-46b8-8bce-b384de5f9909-kube-api-access-6bqp2" (OuterVolumeSpecName: "kube-api-access-6bqp2") pod "eea44cf7-7ca2-46b8-8bce-b384de5f9909" (UID: "eea44cf7-7ca2-46b8-8bce-b384de5f9909"). InnerVolumeSpecName "kube-api-access-6bqp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.777456 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eea44cf7-7ca2-46b8-8bce-b384de5f9909" (UID: "eea44cf7-7ca2-46b8-8bce-b384de5f9909"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.842097 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.842140 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea44cf7-7ca2-46b8-8bce-b384de5f9909-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:41:57 crc kubenswrapper[4865]: I0103 04:41:57.842156 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bqp2\" (UniqueName: \"kubernetes.io/projected/eea44cf7-7ca2-46b8-8bce-b384de5f9909-kube-api-access-6bqp2\") on node \"crc\" DevicePath \"\"" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.058929 4865 generic.go:334] "Generic (PLEG): container finished" podID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerID="939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e" exitCode=0 Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.059005 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djgwr" event={"ID":"eea44cf7-7ca2-46b8-8bce-b384de5f9909","Type":"ContainerDied","Data":"939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e"} Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.059046 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-djgwr" event={"ID":"eea44cf7-7ca2-46b8-8bce-b384de5f9909","Type":"ContainerDied","Data":"cb306184c56760fa8ab30120b7fe74128e926b650c7095e41393bd2f553c4d6b"} Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.059069 4865 scope.go:117] "RemoveContainer" containerID="939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 
04:41:58.059072 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-djgwr" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.084641 4865 scope.go:117] "RemoveContainer" containerID="9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.108804 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-djgwr"] Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.111119 4865 scope.go:117] "RemoveContainer" containerID="c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.117150 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-djgwr"] Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.161739 4865 scope.go:117] "RemoveContainer" containerID="939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e" Jan 03 04:41:58 crc kubenswrapper[4865]: E0103 04:41:58.162184 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e\": container with ID starting with 939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e not found: ID does not exist" containerID="939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.162296 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e"} err="failed to get container status \"939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e\": rpc error: code = NotFound desc = could not find container \"939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e\": container with ID starting with 
939bd3f8f16a7fa92094dbe88da37aed69aeb143a7c848cc1aa5f532ef0c627e not found: ID does not exist" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.162404 4865 scope.go:117] "RemoveContainer" containerID="9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd" Jan 03 04:41:58 crc kubenswrapper[4865]: E0103 04:41:58.163210 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd\": container with ID starting with 9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd not found: ID does not exist" containerID="9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.163249 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd"} err="failed to get container status \"9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd\": rpc error: code = NotFound desc = could not find container \"9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd\": container with ID starting with 9619a7240a90c34aceab444555330c46d1df1da5eec0b8fc4ebcf88c491f6bcd not found: ID does not exist" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.163268 4865 scope.go:117] "RemoveContainer" containerID="c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690" Jan 03 04:41:58 crc kubenswrapper[4865]: E0103 04:41:58.163698 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690\": container with ID starting with c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690 not found: ID does not exist" containerID="c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690" Jan 03 04:41:58 crc 
kubenswrapper[4865]: I0103 04:41:58.163750 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690"} err="failed to get container status \"c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690\": rpc error: code = NotFound desc = could not find container \"c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690\": container with ID starting with c67c645aa9305b6955b9ed46b3cc73709a0e23fdf4e9ca1e37a735b32a2ce690 not found: ID does not exist" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.480735 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.480996 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:58 crc kubenswrapper[4865]: I0103 04:41:58.563558 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:59 crc kubenswrapper[4865]: I0103 04:41:59.136485 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:41:59 crc kubenswrapper[4865]: I0103 04:41:59.168955 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" path="/var/lib/kubelet/pods/eea44cf7-7ca2-46b8-8bce-b384de5f9909/volumes" Jan 03 04:42:00 crc kubenswrapper[4865]: I0103 04:42:00.891354 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztsm5"] Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.096821 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ztsm5" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" 
containerName="registry-server" containerID="cri-o://5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332" gracePeriod=2 Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.671980 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.835943 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srlbk\" (UniqueName: \"kubernetes.io/projected/f20612cc-6f17-49d6-a147-7e80a59bb607-kube-api-access-srlbk\") pod \"f20612cc-6f17-49d6-a147-7e80a59bb607\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.836027 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-utilities\") pod \"f20612cc-6f17-49d6-a147-7e80a59bb607\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.836154 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-catalog-content\") pod \"f20612cc-6f17-49d6-a147-7e80a59bb607\" (UID: \"f20612cc-6f17-49d6-a147-7e80a59bb607\") " Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.837609 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-utilities" (OuterVolumeSpecName: "utilities") pod "f20612cc-6f17-49d6-a147-7e80a59bb607" (UID: "f20612cc-6f17-49d6-a147-7e80a59bb607"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.842882 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20612cc-6f17-49d6-a147-7e80a59bb607-kube-api-access-srlbk" (OuterVolumeSpecName: "kube-api-access-srlbk") pod "f20612cc-6f17-49d6-a147-7e80a59bb607" (UID: "f20612cc-6f17-49d6-a147-7e80a59bb607"). InnerVolumeSpecName "kube-api-access-srlbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.894757 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f20612cc-6f17-49d6-a147-7e80a59bb607" (UID: "f20612cc-6f17-49d6-a147-7e80a59bb607"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.939241 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srlbk\" (UniqueName: \"kubernetes.io/projected/f20612cc-6f17-49d6-a147-7e80a59bb607-kube-api-access-srlbk\") on node \"crc\" DevicePath \"\"" Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.939281 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:42:01 crc kubenswrapper[4865]: I0103 04:42:01.939296 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20612cc-6f17-49d6-a147-7e80a59bb607-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.108043 4865 generic.go:334] "Generic (PLEG): container finished" podID="f20612cc-6f17-49d6-a147-7e80a59bb607" 
containerID="5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332" exitCode=0 Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.108117 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztsm5" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.108124 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztsm5" event={"ID":"f20612cc-6f17-49d6-a147-7e80a59bb607","Type":"ContainerDied","Data":"5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332"} Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.108903 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztsm5" event={"ID":"f20612cc-6f17-49d6-a147-7e80a59bb607","Type":"ContainerDied","Data":"32ef56e525b59a9b4cbcabd2682b2c9aba0f40ef52be49e834d3247a4b4cea8c"} Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.108945 4865 scope.go:117] "RemoveContainer" containerID="5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.131916 4865 scope.go:117] "RemoveContainer" containerID="8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.147236 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztsm5"] Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.154833 4865 scope.go:117] "RemoveContainer" containerID="126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.157811 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ztsm5"] Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.195177 4865 scope.go:117] "RemoveContainer" containerID="5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332" Jan 03 
04:42:02 crc kubenswrapper[4865]: E0103 04:42:02.195688 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332\": container with ID starting with 5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332 not found: ID does not exist" containerID="5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.195735 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332"} err="failed to get container status \"5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332\": rpc error: code = NotFound desc = could not find container \"5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332\": container with ID starting with 5c797908f46c5c4e9e4551823441983064ab644dd7e8015b7ad55fedbef88332 not found: ID does not exist" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.195766 4865 scope.go:117] "RemoveContainer" containerID="8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df" Jan 03 04:42:02 crc kubenswrapper[4865]: E0103 04:42:02.196099 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df\": container with ID starting with 8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df not found: ID does not exist" containerID="8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.196138 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df"} err="failed to get container status 
\"8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df\": rpc error: code = NotFound desc = could not find container \"8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df\": container with ID starting with 8d0778b3457c58947c90694708135214a968c75e6fa9aa4d3e6b573c3dd079df not found: ID does not exist" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.196170 4865 scope.go:117] "RemoveContainer" containerID="126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739" Jan 03 04:42:02 crc kubenswrapper[4865]: E0103 04:42:02.196459 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739\": container with ID starting with 126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739 not found: ID does not exist" containerID="126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739" Jan 03 04:42:02 crc kubenswrapper[4865]: I0103 04:42:02.196507 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739"} err="failed to get container status \"126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739\": rpc error: code = NotFound desc = could not find container \"126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739\": container with ID starting with 126bc620628f8ef63c5d4ed68280b24fd77c226b8c9e383f776db02eafd16739 not found: ID does not exist" Jan 03 04:42:03 crc kubenswrapper[4865]: I0103 04:42:03.172192 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" path="/var/lib/kubelet/pods/f20612cc-6f17-49d6-a147-7e80a59bb607/volumes" Jan 03 04:43:10 crc kubenswrapper[4865]: I0103 04:43:10.740091 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:43:10 crc kubenswrapper[4865]: I0103 04:43:10.740868 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.916369 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khkgr"] Jan 03 04:43:15 crc kubenswrapper[4865]: E0103 04:43:15.917402 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerName="extract-content" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917421 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerName="extract-content" Jan 03 04:43:15 crc kubenswrapper[4865]: E0103 04:43:15.917459 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="registry-server" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917466 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="registry-server" Jan 03 04:43:15 crc kubenswrapper[4865]: E0103 04:43:15.917475 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerName="registry-server" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917481 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerName="registry-server" Jan 03 04:43:15 crc kubenswrapper[4865]: E0103 
04:43:15.917494 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerName="extract-utilities" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917506 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerName="extract-utilities" Jan 03 04:43:15 crc kubenswrapper[4865]: E0103 04:43:15.917514 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="extract-utilities" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917522 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="extract-utilities" Jan 03 04:43:15 crc kubenswrapper[4865]: E0103 04:43:15.917534 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="extract-content" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917542 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="extract-content" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917718 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f20612cc-6f17-49d6-a147-7e80a59bb607" containerName="registry-server" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.917734 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea44cf7-7ca2-46b8-8bce-b384de5f9909" containerName="registry-server" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.919569 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.937138 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khkgr"] Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.993487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-utilities\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.994601 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dr6\" (UniqueName: \"kubernetes.io/projected/9e4de4ee-63b6-4618-9a56-403f6de86bc0-kube-api-access-s2dr6\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:15 crc kubenswrapper[4865]: I0103 04:43:15.995422 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-catalog-content\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.099616 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-catalog-content\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.099718 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-utilities\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.099774 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dr6\" (UniqueName: \"kubernetes.io/projected/9e4de4ee-63b6-4618-9a56-403f6de86bc0-kube-api-access-s2dr6\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.100233 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-catalog-content\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.100359 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-utilities\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.123325 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dr6\" (UniqueName: \"kubernetes.io/projected/9e4de4ee-63b6-4618-9a56-403f6de86bc0-kube-api-access-s2dr6\") pod \"community-operators-khkgr\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.239989 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.732062 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khkgr"] Jan 03 04:43:16 crc kubenswrapper[4865]: I0103 04:43:16.881561 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khkgr" event={"ID":"9e4de4ee-63b6-4618-9a56-403f6de86bc0","Type":"ContainerStarted","Data":"94a51338aeac8293708efd020be74163c91f1526083df2e2620e7cd2a3884c48"} Jan 03 04:43:17 crc kubenswrapper[4865]: I0103 04:43:17.892334 4865 generic.go:334] "Generic (PLEG): container finished" podID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerID="d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3" exitCode=0 Jan 03 04:43:17 crc kubenswrapper[4865]: I0103 04:43:17.892403 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khkgr" event={"ID":"9e4de4ee-63b6-4618-9a56-403f6de86bc0","Type":"ContainerDied","Data":"d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3"} Jan 03 04:43:17 crc kubenswrapper[4865]: I0103 04:43:17.894272 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 04:43:18 crc kubenswrapper[4865]: I0103 04:43:18.902936 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khkgr" event={"ID":"9e4de4ee-63b6-4618-9a56-403f6de86bc0","Type":"ContainerStarted","Data":"3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7"} Jan 03 04:43:19 crc kubenswrapper[4865]: I0103 04:43:19.913676 4865 generic.go:334] "Generic (PLEG): container finished" podID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerID="3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7" exitCode=0 Jan 03 04:43:19 crc kubenswrapper[4865]: I0103 04:43:19.913790 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-khkgr" event={"ID":"9e4de4ee-63b6-4618-9a56-403f6de86bc0","Type":"ContainerDied","Data":"3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7"} Jan 03 04:43:20 crc kubenswrapper[4865]: I0103 04:43:20.931009 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khkgr" event={"ID":"9e4de4ee-63b6-4618-9a56-403f6de86bc0","Type":"ContainerStarted","Data":"47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f"} Jan 03 04:43:20 crc kubenswrapper[4865]: I0103 04:43:20.959770 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khkgr" podStartSLOduration=3.545394639 podStartE2EDuration="5.959742451s" podCreationTimestamp="2026-01-03 04:43:15 +0000 UTC" firstStartedPulling="2026-01-03 04:43:17.894060234 +0000 UTC m=+1625.011113419" lastFinishedPulling="2026-01-03 04:43:20.308408006 +0000 UTC m=+1627.425461231" observedRunningTime="2026-01-03 04:43:20.94821026 +0000 UTC m=+1628.065263495" watchObservedRunningTime="2026-01-03 04:43:20.959742451 +0000 UTC m=+1628.076795676" Jan 03 04:43:26 crc kubenswrapper[4865]: I0103 04:43:26.240991 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:26 crc kubenswrapper[4865]: I0103 04:43:26.241323 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:26 crc kubenswrapper[4865]: I0103 04:43:26.322454 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:27 crc kubenswrapper[4865]: I0103 04:43:27.053476 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:27 crc kubenswrapper[4865]: I0103 04:43:27.110852 
4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khkgr"] Jan 03 04:43:29 crc kubenswrapper[4865]: I0103 04:43:29.023471 4865 generic.go:334] "Generic (PLEG): container finished" podID="32245d9a-04a2-4ee3-99ae-6c876313c5a1" containerID="1e20c03a1904f9646c7996476f6dd6def192401602507f650d464add2364e5df" exitCode=0 Jan 03 04:43:29 crc kubenswrapper[4865]: I0103 04:43:29.023582 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" event={"ID":"32245d9a-04a2-4ee3-99ae-6c876313c5a1","Type":"ContainerDied","Data":"1e20c03a1904f9646c7996476f6dd6def192401602507f650d464add2364e5df"} Jan 03 04:43:29 crc kubenswrapper[4865]: I0103 04:43:29.025159 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khkgr" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="registry-server" containerID="cri-o://47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f" gracePeriod=2 Jan 03 04:43:29 crc kubenswrapper[4865]: I0103 04:43:29.992156 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.036036 4865 generic.go:334] "Generic (PLEG): container finished" podID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerID="47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f" exitCode=0 Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.036139 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khkgr" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.036163 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khkgr" event={"ID":"9e4de4ee-63b6-4618-9a56-403f6de86bc0","Type":"ContainerDied","Data":"47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f"} Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.036237 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khkgr" event={"ID":"9e4de4ee-63b6-4618-9a56-403f6de86bc0","Type":"ContainerDied","Data":"94a51338aeac8293708efd020be74163c91f1526083df2e2620e7cd2a3884c48"} Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.036260 4865 scope.go:117] "RemoveContainer" containerID="47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.089850 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2dr6\" (UniqueName: \"kubernetes.io/projected/9e4de4ee-63b6-4618-9a56-403f6de86bc0-kube-api-access-s2dr6\") pod \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.090008 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-catalog-content\") pod \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.090048 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-utilities\") pod \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\" (UID: \"9e4de4ee-63b6-4618-9a56-403f6de86bc0\") " Jan 03 04:43:30 crc 
kubenswrapper[4865]: I0103 04:43:30.093717 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-utilities" (OuterVolumeSpecName: "utilities") pod "9e4de4ee-63b6-4618-9a56-403f6de86bc0" (UID: "9e4de4ee-63b6-4618-9a56-403f6de86bc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.113779 4865 scope.go:117] "RemoveContainer" containerID="3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.116690 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4de4ee-63b6-4618-9a56-403f6de86bc0-kube-api-access-s2dr6" (OuterVolumeSpecName: "kube-api-access-s2dr6") pod "9e4de4ee-63b6-4618-9a56-403f6de86bc0" (UID: "9e4de4ee-63b6-4618-9a56-403f6de86bc0"). InnerVolumeSpecName "kube-api-access-s2dr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.170276 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e4de4ee-63b6-4618-9a56-403f6de86bc0" (UID: "9e4de4ee-63b6-4618-9a56-403f6de86bc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.183730 4865 scope.go:117] "RemoveContainer" containerID="d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.192684 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.192723 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4de4ee-63b6-4618-9a56-403f6de86bc0-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.192737 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2dr6\" (UniqueName: \"kubernetes.io/projected/9e4de4ee-63b6-4618-9a56-403f6de86bc0-kube-api-access-s2dr6\") on node \"crc\" DevicePath \"\"" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.208452 4865 scope.go:117] "RemoveContainer" containerID="47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f" Jan 03 04:43:30 crc kubenswrapper[4865]: E0103 04:43:30.209222 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f\": container with ID starting with 47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f not found: ID does not exist" containerID="47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.209265 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f"} err="failed to get container status 
\"47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f\": rpc error: code = NotFound desc = could not find container \"47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f\": container with ID starting with 47981ab3b5b7bd34ad06132c33c32d547d4486df27369973d7d9e5eb150ec67f not found: ID does not exist" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.209292 4865 scope.go:117] "RemoveContainer" containerID="3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7" Jan 03 04:43:30 crc kubenswrapper[4865]: E0103 04:43:30.209626 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7\": container with ID starting with 3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7 not found: ID does not exist" containerID="3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.209683 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7"} err="failed to get container status \"3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7\": rpc error: code = NotFound desc = could not find container \"3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7\": container with ID starting with 3c4ef883d546a10f48d160548b14a324a5603e6bce0927020c1d9decd3b1e3f7 not found: ID does not exist" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.209727 4865 scope.go:117] "RemoveContainer" containerID="d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3" Jan 03 04:43:30 crc kubenswrapper[4865]: E0103 04:43:30.210900 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3\": container with ID starting with d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3 not found: ID does not exist" containerID="d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.210926 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3"} err="failed to get container status \"d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3\": rpc error: code = NotFound desc = could not find container \"d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3\": container with ID starting with d958bc34013efa948a21a96340586999cd6835cd38b808801c2dab6c3d432db3 not found: ID does not exist" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.377302 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khkgr"] Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.394377 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khkgr"] Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.474714 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.501110 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-inventory\") pod \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.501223 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/32245d9a-04a2-4ee3-99ae-6c876313c5a1-kube-api-access-669dw\") pod \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.501350 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-ssh-key\") pod \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.501671 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-bootstrap-combined-ca-bundle\") pod \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\" (UID: \"32245d9a-04a2-4ee3-99ae-6c876313c5a1\") " Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.508242 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "32245d9a-04a2-4ee3-99ae-6c876313c5a1" (UID: "32245d9a-04a2-4ee3-99ae-6c876313c5a1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.510696 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32245d9a-04a2-4ee3-99ae-6c876313c5a1-kube-api-access-669dw" (OuterVolumeSpecName: "kube-api-access-669dw") pod "32245d9a-04a2-4ee3-99ae-6c876313c5a1" (UID: "32245d9a-04a2-4ee3-99ae-6c876313c5a1"). InnerVolumeSpecName "kube-api-access-669dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.537576 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "32245d9a-04a2-4ee3-99ae-6c876313c5a1" (UID: "32245d9a-04a2-4ee3-99ae-6c876313c5a1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.548081 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-inventory" (OuterVolumeSpecName: "inventory") pod "32245d9a-04a2-4ee3-99ae-6c876313c5a1" (UID: "32245d9a-04a2-4ee3-99ae-6c876313c5a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.603833 4865 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.603865 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.603874 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/32245d9a-04a2-4ee3-99ae-6c876313c5a1-kube-api-access-669dw\") on node \"crc\" DevicePath \"\"" Jan 03 04:43:30 crc kubenswrapper[4865]: I0103 04:43:30.603885 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32245d9a-04a2-4ee3-99ae-6c876313c5a1-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.047352 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" event={"ID":"32245d9a-04a2-4ee3-99ae-6c876313c5a1","Type":"ContainerDied","Data":"3c0ac55e0ee23bcfc1a91dbb4a026330bf238a4c99556dc3e3c65361d87e2d26"} Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.047411 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c0ac55e0ee23bcfc1a91dbb4a026330bf238a4c99556dc3e3c65361d87e2d26" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.049151 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.146736 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz"] Jan 03 04:43:31 crc kubenswrapper[4865]: E0103 04:43:31.147605 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="extract-content" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.147619 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="extract-content" Jan 03 04:43:31 crc kubenswrapper[4865]: E0103 04:43:31.147637 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32245d9a-04a2-4ee3-99ae-6c876313c5a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.147644 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="32245d9a-04a2-4ee3-99ae-6c876313c5a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 03 04:43:31 crc kubenswrapper[4865]: E0103 04:43:31.147660 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="registry-server" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.147667 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="registry-server" Jan 03 04:43:31 crc kubenswrapper[4865]: E0103 04:43:31.147686 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="extract-utilities" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.147692 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="extract-utilities" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.147860 
4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="32245d9a-04a2-4ee3-99ae-6c876313c5a1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.147877 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" containerName="registry-server" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.149533 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.153974 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.154572 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.155034 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.162320 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.188075 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4de4ee-63b6-4618-9a56-403f6de86bc0" path="/var/lib/kubelet/pods/9e4de4ee-63b6-4618-9a56-403f6de86bc0/volumes" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.188894 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz"] Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.217459 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.217656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrjd\" (UniqueName: \"kubernetes.io/projected/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-kube-api-access-vrrjd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.217742 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.319897 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrjd\" (UniqueName: \"kubernetes.io/projected/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-kube-api-access-vrrjd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.320300 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: 
\"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.320357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.324078 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.324791 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.335449 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrjd\" (UniqueName: \"kubernetes.io/projected/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-kube-api-access-vrrjd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:31 crc kubenswrapper[4865]: I0103 04:43:31.514754 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:43:32 crc kubenswrapper[4865]: I0103 04:43:32.153135 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz"] Jan 03 04:43:32 crc kubenswrapper[4865]: W0103 04:43:32.171518 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod012b0a73_ea86_4b62_aad3_f6b4f63a32bc.slice/crio-b5a75eb95b54772db4a5c3b2028800d4f95c24b4a12e9002c8fff20749b374d5 WatchSource:0}: Error finding container b5a75eb95b54772db4a5c3b2028800d4f95c24b4a12e9002c8fff20749b374d5: Status 404 returned error can't find the container with id b5a75eb95b54772db4a5c3b2028800d4f95c24b4a12e9002c8fff20749b374d5 Jan 03 04:43:33 crc kubenswrapper[4865]: I0103 04:43:33.073315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" event={"ID":"012b0a73-ea86-4b62-aad3-f6b4f63a32bc","Type":"ContainerStarted","Data":"1626ee830958030f232e663a92f16360767c428f4cfce2271f57e27baf082c5b"} Jan 03 04:43:33 crc kubenswrapper[4865]: I0103 04:43:33.074016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" event={"ID":"012b0a73-ea86-4b62-aad3-f6b4f63a32bc","Type":"ContainerStarted","Data":"b5a75eb95b54772db4a5c3b2028800d4f95c24b4a12e9002c8fff20749b374d5"} Jan 03 04:43:33 crc kubenswrapper[4865]: I0103 04:43:33.093821 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" podStartSLOduration=1.493016263 podStartE2EDuration="2.09380163s" podCreationTimestamp="2026-01-03 04:43:31 +0000 UTC" firstStartedPulling="2026-01-03 04:43:32.174932156 +0000 UTC m=+1639.291985351" lastFinishedPulling="2026-01-03 04:43:32.775717533 +0000 UTC 
m=+1639.892770718" observedRunningTime="2026-01-03 04:43:33.088140617 +0000 UTC m=+1640.205193802" watchObservedRunningTime="2026-01-03 04:43:33.09380163 +0000 UTC m=+1640.210854815" Jan 03 04:43:40 crc kubenswrapper[4865]: I0103 04:43:40.739923 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:43:40 crc kubenswrapper[4865]: I0103 04:43:40.740750 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.061366 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-m62nb"] Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.096809 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-s7bzd"] Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.105695 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cd23-account-create-update-xkg9s"] Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.113921 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-22a2-account-create-update-8gxc5"] Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.124251 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-m62nb"] Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.135519 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-s7bzd"] Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.145557 4865 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-22a2-account-create-update-8gxc5"] Jan 03 04:44:04 crc kubenswrapper[4865]: I0103 04:44:04.154829 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cd23-account-create-update-xkg9s"] Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.032964 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-m26q5"] Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.042217 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-61cb-account-create-update-5r7hx"] Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.050715 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-m26q5"] Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.059774 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-61cb-account-create-update-5r7hx"] Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.165631 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059d7a19-c549-4eeb-bcbb-6be0e69475e6" path="/var/lib/kubelet/pods/059d7a19-c549-4eeb-bcbb-6be0e69475e6/volumes" Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.166557 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87250d2e-2d43-478a-9500-33cc335bca50" path="/var/lib/kubelet/pods/87250d2e-2d43-478a-9500-33cc335bca50/volumes" Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.167234 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c0b5b3-662d-4b55-a743-0e8652bd72b3" path="/var/lib/kubelet/pods/94c0b5b3-662d-4b55-a743-0e8652bd72b3/volumes" Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.167970 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d77c8a-eaac-4c29-8006-66c3882e909f" path="/var/lib/kubelet/pods/e4d77c8a-eaac-4c29-8006-66c3882e909f/volumes" Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 
04:44:05.169266 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e945d4e3-9ccb-449d-880c-3ef6ea90048c" path="/var/lib/kubelet/pods/e945d4e3-9ccb-449d-880c-3ef6ea90048c/volumes" Jan 03 04:44:05 crc kubenswrapper[4865]: I0103 04:44:05.169988 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22" path="/var/lib/kubelet/pods/f46abc38-3d62-4fcc-8dfe-13bbb6f9bc22/volumes" Jan 03 04:44:10 crc kubenswrapper[4865]: I0103 04:44:10.739576 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:44:10 crc kubenswrapper[4865]: I0103 04:44:10.740247 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:44:10 crc kubenswrapper[4865]: I0103 04:44:10.740300 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:44:10 crc kubenswrapper[4865]: I0103 04:44:10.741140 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:44:10 crc kubenswrapper[4865]: I0103 04:44:10.741204 4865 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" gracePeriod=600 Jan 03 04:44:10 crc kubenswrapper[4865]: E0103 04:44:10.873199 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:44:11 crc kubenswrapper[4865]: I0103 04:44:11.516666 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" exitCode=0 Jan 03 04:44:11 crc kubenswrapper[4865]: I0103 04:44:11.516759 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e"} Jan 03 04:44:11 crc kubenswrapper[4865]: I0103 04:44:11.517200 4865 scope.go:117] "RemoveContainer" containerID="f70e7e3b6f466cd92b76640e9a405cdf202ff8ab85d90bf5c8de1a794992f21a" Jan 03 04:44:11 crc kubenswrapper[4865]: I0103 04:44:11.518111 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:44:11 crc kubenswrapper[4865]: E0103 04:44:11.518784 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:44:25 crc kubenswrapper[4865]: I0103 04:44:25.156434 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:44:25 crc kubenswrapper[4865]: E0103 04:44:25.157376 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.048756 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3250-account-create-update-vwhqs"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.056250 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3a49-account-create-update-m8d4l"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.065837 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xgsl9"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.078651 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-71cb-account-create-update-68wkg"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.088882 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ngq5m"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.101243 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rft58"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.112092 4865 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-wkxxn"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.122529 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xgsl9"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.131972 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3250-account-create-update-vwhqs"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.153814 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rft58"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.171745 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-71cb-account-create-update-68wkg"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.179430 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3a49-account-create-update-m8d4l"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.189354 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ngq5m"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.201811 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-wkxxn"] Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.790089 4865 scope.go:117] "RemoveContainer" containerID="9199f235fa9d3a7c99bcdeb94581b4181ab201ca0eab0e8d19871ce819061ef3" Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.811149 4865 scope.go:117] "RemoveContainer" containerID="30dc16f8afb38cd17fe22c23263144ce9919da9ca4427de44bb4da8ff036ae26" Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.884199 4865 scope.go:117] "RemoveContainer" containerID="953da7c959bc285bd197c1a16fb16bed719f8c662ea8ab8dc4e2272298266edc" Jan 03 04:44:34 crc kubenswrapper[4865]: I0103 04:44:34.913649 4865 scope.go:117] "RemoveContainer" containerID="7806a2836b7bd2dcf37c505b6e3febcaa66944edf94a65548d71d5031a035303" Jan 03 04:44:34 crc 
kubenswrapper[4865]: I0103 04:44:34.974286 4865 scope.go:117] "RemoveContainer" containerID="f1efceb784003c065e4ff5445f9037f7bee1b4be7904303d0c83055685aa6030" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.020431 4865 scope.go:117] "RemoveContainer" containerID="4832be279f13117c576ccd6cf6b34bc53465d68df3c764b92c1b1d4fe611fa29" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.176588 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f96647-4c0b-483e-b0aa-a2aa6069ef3c" path="/var/lib/kubelet/pods/20f96647-4c0b-483e-b0aa-a2aa6069ef3c/volumes" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.178119 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38befb89-f7ad-4e15-bfa9-d54cb0595e97" path="/var/lib/kubelet/pods/38befb89-f7ad-4e15-bfa9-d54cb0595e97/volumes" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.179546 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ba6a0b-8b85-4722-8be7-1d059c17f147" path="/var/lib/kubelet/pods/56ba6a0b-8b85-4722-8be7-1d059c17f147/volumes" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.180821 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a" path="/var/lib/kubelet/pods/9e1d21f6-4e5a-407d-be7d-ca8d2fcdf24a/volumes" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.182904 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c68fb97f-4d5d-4258-bd60-882e5aa61ba7" path="/var/lib/kubelet/pods/c68fb97f-4d5d-4258-bd60-882e5aa61ba7/volumes" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.184307 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db440cf4-6919-4524-a661-1ce3b3f009b0" path="/var/lib/kubelet/pods/db440cf4-6919-4524-a661-1ce3b3f009b0/volumes" Jan 03 04:44:35 crc kubenswrapper[4865]: I0103 04:44:35.185778 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fb717539-b1e5-4a67-97d4-2632d4e7fd7e" path="/var/lib/kubelet/pods/fb717539-b1e5-4a67-97d4-2632d4e7fd7e/volumes" Jan 03 04:44:37 crc kubenswrapper[4865]: I0103 04:44:37.156505 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:44:37 crc kubenswrapper[4865]: E0103 04:44:37.157642 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:44:40 crc kubenswrapper[4865]: I0103 04:44:40.037996 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-lcnz6"] Jan 03 04:44:40 crc kubenswrapper[4865]: I0103 04:44:40.053270 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-lcnz6"] Jan 03 04:44:41 crc kubenswrapper[4865]: I0103 04:44:41.194731 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0acdff76-f952-437b-a36d-d1469377e304" path="/var/lib/kubelet/pods/0acdff76-f952-437b-a36d-d1469377e304/volumes" Jan 03 04:44:48 crc kubenswrapper[4865]: I0103 04:44:48.156234 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:44:48 crc kubenswrapper[4865]: E0103 04:44:48.157000 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.169749 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb"] Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.175014 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.178089 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.178090 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.188990 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb"] Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.277831 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fn9x\" (UniqueName: \"kubernetes.io/projected/73d9956d-43dd-4501-83e0-c576b055e696-kube-api-access-6fn9x\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.277995 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73d9956d-43dd-4501-83e0-c576b055e696-config-volume\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 
04:45:00.278486 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73d9956d-43dd-4501-83e0-c576b055e696-secret-volume\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.380019 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73d9956d-43dd-4501-83e0-c576b055e696-secret-volume\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.380178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fn9x\" (UniqueName: \"kubernetes.io/projected/73d9956d-43dd-4501-83e0-c576b055e696-kube-api-access-6fn9x\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.380242 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73d9956d-43dd-4501-83e0-c576b055e696-config-volume\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.381199 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73d9956d-43dd-4501-83e0-c576b055e696-config-volume\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.393126 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73d9956d-43dd-4501-83e0-c576b055e696-secret-volume\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.404007 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fn9x\" (UniqueName: \"kubernetes.io/projected/73d9956d-43dd-4501-83e0-c576b055e696-kube-api-access-6fn9x\") pod \"collect-profiles-29456925-phpqb\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.510015 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:00 crc kubenswrapper[4865]: I0103 04:45:00.995401 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb"] Jan 03 04:45:01 crc kubenswrapper[4865]: I0103 04:45:01.084213 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" event={"ID":"73d9956d-43dd-4501-83e0-c576b055e696","Type":"ContainerStarted","Data":"d286af49168ce7c473c452c3417f213c82751fa9bba824ec26e6ece61edf5af3"} Jan 03 04:45:02 crc kubenswrapper[4865]: I0103 04:45:02.102575 4865 generic.go:334] "Generic (PLEG): container finished" podID="73d9956d-43dd-4501-83e0-c576b055e696" containerID="fa747b1972c46801b1666cdba81273ef33068b96bda07bd744f30165ce7eb9c5" exitCode=0 Jan 03 04:45:02 crc kubenswrapper[4865]: I0103 04:45:02.102658 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" event={"ID":"73d9956d-43dd-4501-83e0-c576b055e696","Type":"ContainerDied","Data":"fa747b1972c46801b1666cdba81273ef33068b96bda07bd744f30165ce7eb9c5"} Jan 03 04:45:02 crc kubenswrapper[4865]: I0103 04:45:02.155562 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:45:02 crc kubenswrapper[4865]: E0103 04:45:02.155940 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.489843 4865 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.556321 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fn9x\" (UniqueName: \"kubernetes.io/projected/73d9956d-43dd-4501-83e0-c576b055e696-kube-api-access-6fn9x\") pod \"73d9956d-43dd-4501-83e0-c576b055e696\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.556549 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73d9956d-43dd-4501-83e0-c576b055e696-config-volume\") pod \"73d9956d-43dd-4501-83e0-c576b055e696\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.556849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73d9956d-43dd-4501-83e0-c576b055e696-secret-volume\") pod \"73d9956d-43dd-4501-83e0-c576b055e696\" (UID: \"73d9956d-43dd-4501-83e0-c576b055e696\") " Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.557342 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d9956d-43dd-4501-83e0-c576b055e696-config-volume" (OuterVolumeSpecName: "config-volume") pod "73d9956d-43dd-4501-83e0-c576b055e696" (UID: "73d9956d-43dd-4501-83e0-c576b055e696"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.557515 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73d9956d-43dd-4501-83e0-c576b055e696-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.562309 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d9956d-43dd-4501-83e0-c576b055e696-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "73d9956d-43dd-4501-83e0-c576b055e696" (UID: "73d9956d-43dd-4501-83e0-c576b055e696"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.564746 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d9956d-43dd-4501-83e0-c576b055e696-kube-api-access-6fn9x" (OuterVolumeSpecName: "kube-api-access-6fn9x") pod "73d9956d-43dd-4501-83e0-c576b055e696" (UID: "73d9956d-43dd-4501-83e0-c576b055e696"). InnerVolumeSpecName "kube-api-access-6fn9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.658864 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fn9x\" (UniqueName: \"kubernetes.io/projected/73d9956d-43dd-4501-83e0-c576b055e696-kube-api-access-6fn9x\") on node \"crc\" DevicePath \"\"" Jan 03 04:45:03 crc kubenswrapper[4865]: I0103 04:45:03.658898 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73d9956d-43dd-4501-83e0-c576b055e696-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 04:45:04 crc kubenswrapper[4865]: I0103 04:45:04.125285 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" event={"ID":"73d9956d-43dd-4501-83e0-c576b055e696","Type":"ContainerDied","Data":"d286af49168ce7c473c452c3417f213c82751fa9bba824ec26e6ece61edf5af3"} Jan 03 04:45:04 crc kubenswrapper[4865]: I0103 04:45:04.125333 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d286af49168ce7c473c452c3417f213c82751fa9bba824ec26e6ece61edf5af3" Jan 03 04:45:04 crc kubenswrapper[4865]: I0103 04:45:04.125517 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb" Jan 03 04:45:13 crc kubenswrapper[4865]: I0103 04:45:13.048929 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gdpqx"] Jan 03 04:45:13 crc kubenswrapper[4865]: I0103 04:45:13.059659 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gdpqx"] Jan 03 04:45:13 crc kubenswrapper[4865]: I0103 04:45:13.166121 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89004a40-1d1d-46ec-a342-a067fb1eaa54" path="/var/lib/kubelet/pods/89004a40-1d1d-46ec-a342-a067fb1eaa54/volumes" Jan 03 04:45:15 crc kubenswrapper[4865]: I0103 04:45:15.156269 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:45:15 crc kubenswrapper[4865]: E0103 04:45:15.157184 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:45:17 crc kubenswrapper[4865]: I0103 04:45:17.252732 4865 generic.go:334] "Generic (PLEG): container finished" podID="012b0a73-ea86-4b62-aad3-f6b4f63a32bc" containerID="1626ee830958030f232e663a92f16360767c428f4cfce2271f57e27baf082c5b" exitCode=0 Jan 03 04:45:17 crc kubenswrapper[4865]: I0103 04:45:17.252890 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" event={"ID":"012b0a73-ea86-4b62-aad3-f6b4f63a32bc","Type":"ContainerDied","Data":"1626ee830958030f232e663a92f16360767c428f4cfce2271f57e27baf082c5b"} Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.756834 4865 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.869830 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-inventory\") pod \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.869916 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrrjd\" (UniqueName: \"kubernetes.io/projected/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-kube-api-access-vrrjd\") pod \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.869949 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-ssh-key\") pod \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\" (UID: \"012b0a73-ea86-4b62-aad3-f6b4f63a32bc\") " Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.877052 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-kube-api-access-vrrjd" (OuterVolumeSpecName: "kube-api-access-vrrjd") pod "012b0a73-ea86-4b62-aad3-f6b4f63a32bc" (UID: "012b0a73-ea86-4b62-aad3-f6b4f63a32bc"). InnerVolumeSpecName "kube-api-access-vrrjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.896099 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "012b0a73-ea86-4b62-aad3-f6b4f63a32bc" (UID: "012b0a73-ea86-4b62-aad3-f6b4f63a32bc"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.904884 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-inventory" (OuterVolumeSpecName: "inventory") pod "012b0a73-ea86-4b62-aad3-f6b4f63a32bc" (UID: "012b0a73-ea86-4b62-aad3-f6b4f63a32bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.972205 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.972236 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:45:18 crc kubenswrapper[4865]: I0103 04:45:18.972246 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrrjd\" (UniqueName: \"kubernetes.io/projected/012b0a73-ea86-4b62-aad3-f6b4f63a32bc-kube-api-access-vrrjd\") on node \"crc\" DevicePath \"\"" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.282528 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" event={"ID":"012b0a73-ea86-4b62-aad3-f6b4f63a32bc","Type":"ContainerDied","Data":"b5a75eb95b54772db4a5c3b2028800d4f95c24b4a12e9002c8fff20749b374d5"} Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.282569 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a75eb95b54772db4a5c3b2028800d4f95c24b4a12e9002c8fff20749b374d5" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.282604 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.366758 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk"] Jan 03 04:45:19 crc kubenswrapper[4865]: E0103 04:45:19.367240 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d9956d-43dd-4501-83e0-c576b055e696" containerName="collect-profiles" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.367264 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d9956d-43dd-4501-83e0-c576b055e696" containerName="collect-profiles" Jan 03 04:45:19 crc kubenswrapper[4865]: E0103 04:45:19.367278 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012b0a73-ea86-4b62-aad3-f6b4f63a32bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.367287 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="012b0a73-ea86-4b62-aad3-f6b4f63a32bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.367536 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="012b0a73-ea86-4b62-aad3-f6b4f63a32bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.367564 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d9956d-43dd-4501-83e0-c576b055e696" containerName="collect-profiles" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.368345 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.371323 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.371577 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.371869 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.380621 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk"] Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.426812 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.481332 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wttg\" (UniqueName: \"kubernetes.io/projected/1811dd2a-9abd-466c-8c53-992d887c9321-kube-api-access-4wttg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.481487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 
04:45:19.481527 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.583532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wttg\" (UniqueName: \"kubernetes.io/projected/1811dd2a-9abd-466c-8c53-992d887c9321-kube-api-access-4wttg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.583925 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.583957 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.588148 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.590321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.602490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wttg\" (UniqueName: \"kubernetes.io/projected/1811dd2a-9abd-466c-8c53-992d887c9321-kube-api-access-4wttg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:19 crc kubenswrapper[4865]: I0103 04:45:19.729915 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:45:20 crc kubenswrapper[4865]: I0103 04:45:20.284586 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk"] Jan 03 04:45:21 crc kubenswrapper[4865]: I0103 04:45:21.303348 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" event={"ID":"1811dd2a-9abd-466c-8c53-992d887c9321","Type":"ContainerStarted","Data":"77a8621741e30b5719e867cb3e2882f31a1400eeb0b717eabe37f905b627e2c7"} Jan 03 04:45:22 crc kubenswrapper[4865]: I0103 04:45:22.316602 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" event={"ID":"1811dd2a-9abd-466c-8c53-992d887c9321","Type":"ContainerStarted","Data":"564b767685ee5eccaed3359a72a9d52c1181352e1c1a2dddac15ee44fcec005c"} Jan 03 04:45:22 crc kubenswrapper[4865]: I0103 04:45:22.352357 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" podStartSLOduration=2.613674305 podStartE2EDuration="3.352329712s" podCreationTimestamp="2026-01-03 04:45:19 +0000 UTC" firstStartedPulling="2026-01-03 04:45:20.297479148 +0000 UTC m=+1747.414532333" lastFinishedPulling="2026-01-03 04:45:21.036134555 +0000 UTC m=+1748.153187740" observedRunningTime="2026-01-03 04:45:22.349854705 +0000 UTC m=+1749.466907910" watchObservedRunningTime="2026-01-03 04:45:22.352329712 +0000 UTC m=+1749.469382927" Jan 03 04:45:26 crc kubenswrapper[4865]: I0103 04:45:26.054485 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lqmx2"] Jan 03 04:45:26 crc kubenswrapper[4865]: I0103 04:45:26.063331 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lqmx2"] Jan 03 04:45:27 crc 
kubenswrapper[4865]: I0103 04:45:27.174798 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d01ded-38d8-4af2-96ef-ba4a3f290f9c" path="/var/lib/kubelet/pods/c7d01ded-38d8-4af2-96ef-ba4a3f290f9c/volumes" Jan 03 04:45:30 crc kubenswrapper[4865]: I0103 04:45:30.156492 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:45:30 crc kubenswrapper[4865]: E0103 04:45:30.157802 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:45:31 crc kubenswrapper[4865]: I0103 04:45:31.029554 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fhrp6"] Jan 03 04:45:31 crc kubenswrapper[4865]: I0103 04:45:31.042040 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fhrp6"] Jan 03 04:45:31 crc kubenswrapper[4865]: I0103 04:45:31.165486 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49719dec-6060-4d9b-ad15-fbeac83d7ab1" path="/var/lib/kubelet/pods/49719dec-6060-4d9b-ad15-fbeac83d7ab1/volumes" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.037802 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-m8g98"] Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.052956 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-m8g98"] Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.146346 4865 scope.go:117] "RemoveContainer" containerID="828a08042a0b43b49d7f5c253155a3e741f7700266797d021f2d501dfdcc8625" Jan 03 04:45:35 crc 
kubenswrapper[4865]: I0103 04:45:35.172166 4865 scope.go:117] "RemoveContainer" containerID="2a1f8f0b04457a081dc0b02ec2ed4189dc8ccab348be17523df59c84674e75cb" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.190285 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4902a49-2ac7-4172-9f70-b4b14dfb7d67" path="/var/lib/kubelet/pods/d4902a49-2ac7-4172-9f70-b4b14dfb7d67/volumes" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.241358 4865 scope.go:117] "RemoveContainer" containerID="c2bd731edf6964af3b3b09f7bd761645a291c31207168a1dd85d67e538b2b57e" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.308298 4865 scope.go:117] "RemoveContainer" containerID="83d68e3c4117fd1e543595bf10a312626dab4f8d661f56373d489df5c28d0e31" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.340084 4865 scope.go:117] "RemoveContainer" containerID="e691228bd9c8a7b0088726cba7bb4717e82caa24c0cccd9750d6da634a082b38" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.372048 4865 scope.go:117] "RemoveContainer" containerID="9fc856c31e68579a136c1c8dfb4dce21c98431c2b57815f797e5663e4b35c1eb" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.406788 4865 scope.go:117] "RemoveContainer" containerID="8b2928d909656017cdf32ea61094feb8f81b6a8956aa5d9a6b86ad93fbba6522" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.431203 4865 scope.go:117] "RemoveContainer" containerID="4e6c25341a62a4b9b1c6b6570b8897690dc69b8763ae425de8a5b1df557f031a" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.482427 4865 scope.go:117] "RemoveContainer" containerID="c0fdf93a2f8e30903b52595c780ccd9edd80cbdc5b0eceedb849b0ba9250799a" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.506621 4865 scope.go:117] "RemoveContainer" containerID="0c70a0b9453adcdc529eda92ad38e7eb23c3f27fe685338ce5cb6f9424521a78" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.526781 4865 scope.go:117] "RemoveContainer" 
containerID="d05cc379d6f12cb9798baadfdd28c0404c0749045f583f31c7748e0c6f3fc4e4" Jan 03 04:45:35 crc kubenswrapper[4865]: I0103 04:45:35.563212 4865 scope.go:117] "RemoveContainer" containerID="a8151f80e3f0bfd14c0e8015fac2ad0a5ec57c380d318ca41181a991a4fe56bc" Jan 03 04:45:39 crc kubenswrapper[4865]: I0103 04:45:39.041835 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6rcv2"] Jan 03 04:45:39 crc kubenswrapper[4865]: I0103 04:45:39.058976 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6rcv2"] Jan 03 04:45:39 crc kubenswrapper[4865]: I0103 04:45:39.172625 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16da42a-8750-476c-abdf-8054eca2694a" path="/var/lib/kubelet/pods/d16da42a-8750-476c-abdf-8054eca2694a/volumes" Jan 03 04:45:41 crc kubenswrapper[4865]: I0103 04:45:41.030600 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ll974"] Jan 03 04:45:41 crc kubenswrapper[4865]: I0103 04:45:41.039495 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ll974"] Jan 03 04:45:41 crc kubenswrapper[4865]: I0103 04:45:41.168321 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="394d36aa-4f2f-4f5f-a904-1fb372f2de27" path="/var/lib/kubelet/pods/394d36aa-4f2f-4f5f-a904-1fb372f2de27/volumes" Jan 03 04:45:44 crc kubenswrapper[4865]: I0103 04:45:44.155555 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:45:44 crc kubenswrapper[4865]: E0103 04:45:44.156339 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:45:56 crc kubenswrapper[4865]: I0103 04:45:56.156800 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:45:56 crc kubenswrapper[4865]: E0103 04:45:56.157954 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:46:09 crc kubenswrapper[4865]: I0103 04:46:09.156572 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:46:09 crc kubenswrapper[4865]: E0103 04:46:09.157340 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:46:21 crc kubenswrapper[4865]: I0103 04:46:21.155677 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:46:21 crc kubenswrapper[4865]: E0103 04:46:21.156404 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:46:24 crc kubenswrapper[4865]: I0103 04:46:24.068656 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5tjfp"] Jan 03 04:46:24 crc kubenswrapper[4865]: I0103 04:46:24.084233 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5tjfp"] Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.041586 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db47-account-create-update-9bncr"] Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.049891 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b0b1-account-create-update-mlj6w"] Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.064351 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db47-account-create-update-9bncr"] Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.076836 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-phgms"] Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.088196 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b0b1-account-create-update-mlj6w"] Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.099548 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-phgms"] Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.169330 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d8ce9c-3980-4c17-b824-ee567eb03edd" path="/var/lib/kubelet/pods/49d8ce9c-3980-4c17-b824-ee567eb03edd/volumes" Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.170104 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4c15343b-751f-4ae7-8f7b-6fc5714d4d16" path="/var/lib/kubelet/pods/4c15343b-751f-4ae7-8f7b-6fc5714d4d16/volumes" Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.170817 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5340eb93-db7a-4d2f-b33c-c3f5913c12cc" path="/var/lib/kubelet/pods/5340eb93-db7a-4d2f-b33c-c3f5913c12cc/volumes" Jan 03 04:46:25 crc kubenswrapper[4865]: I0103 04:46:25.171554 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d88ffb-d26d-4f63-980c-4313f401541c" path="/var/lib/kubelet/pods/e4d88ffb-d26d-4f63-980c-4313f401541c/volumes" Jan 03 04:46:26 crc kubenswrapper[4865]: I0103 04:46:26.040524 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1d07-account-create-update-m8fgm"] Jan 03 04:46:26 crc kubenswrapper[4865]: I0103 04:46:26.057884 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4wmtx"] Jan 03 04:46:26 crc kubenswrapper[4865]: I0103 04:46:26.086943 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1d07-account-create-update-m8fgm"] Jan 03 04:46:26 crc kubenswrapper[4865]: I0103 04:46:26.103592 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4wmtx"] Jan 03 04:46:27 crc kubenswrapper[4865]: I0103 04:46:27.169216 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65fe916e-b334-4fb6-9e63-8491fa16cffc" path="/var/lib/kubelet/pods/65fe916e-b334-4fb6-9e63-8491fa16cffc/volumes" Jan 03 04:46:27 crc kubenswrapper[4865]: I0103 04:46:27.170464 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7b3518-8b72-4c31-8232-98f2fd0d4966" path="/var/lib/kubelet/pods/7b7b3518-8b72-4c31-8232-98f2fd0d4966/volumes" Jan 03 04:46:33 crc kubenswrapper[4865]: I0103 04:46:33.168034 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:46:33 
crc kubenswrapper[4865]: E0103 04:46:33.169050 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:46:35 crc kubenswrapper[4865]: I0103 04:46:35.832113 4865 scope.go:117] "RemoveContainer" containerID="888ce525b5cb51762119ce0807fe0b4f18a44666c63fe3170dd19697dec3bce2" Jan 03 04:46:35 crc kubenswrapper[4865]: I0103 04:46:35.866400 4865 scope.go:117] "RemoveContainer" containerID="d163b9dd6c3d565976bc6cde2e6995b55277335a94fb0d5d3fb7e0adf7050300" Jan 03 04:46:35 crc kubenswrapper[4865]: I0103 04:46:35.926849 4865 scope.go:117] "RemoveContainer" containerID="3e705aeec1985a870106af6ddcd50d97528e28eafa6b50b51969f029259c1b4f" Jan 03 04:46:35 crc kubenswrapper[4865]: I0103 04:46:35.977043 4865 scope.go:117] "RemoveContainer" containerID="372d9ac7e02cba3b2d3ec7becc1719b6eda049c584aa730c77ff084618c867a4" Jan 03 04:46:36 crc kubenswrapper[4865]: I0103 04:46:36.044457 4865 scope.go:117] "RemoveContainer" containerID="b0289d137493299d875eff99d5805eb8dce604ebf81f0de5dd83d65764fff223" Jan 03 04:46:36 crc kubenswrapper[4865]: I0103 04:46:36.102435 4865 scope.go:117] "RemoveContainer" containerID="fd757cee28b8e6b3a180c118d7af6c71a7dd9d0556a61e4c3b5e9191540eb5bb" Jan 03 04:46:36 crc kubenswrapper[4865]: I0103 04:46:36.105285 4865 generic.go:334] "Generic (PLEG): container finished" podID="1811dd2a-9abd-466c-8c53-992d887c9321" containerID="564b767685ee5eccaed3359a72a9d52c1181352e1c1a2dddac15ee44fcec005c" exitCode=0 Jan 03 04:46:36 crc kubenswrapper[4865]: I0103 04:46:36.105363 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" event={"ID":"1811dd2a-9abd-466c-8c53-992d887c9321","Type":"ContainerDied","Data":"564b767685ee5eccaed3359a72a9d52c1181352e1c1a2dddac15ee44fcec005c"} Jan 03 04:46:36 crc kubenswrapper[4865]: I0103 04:46:36.138609 4865 scope.go:117] "RemoveContainer" containerID="1cfc267180ee50c51f7d0b17fcbded1620bcd2dbab832a75eec41ab5020c8edd" Jan 03 04:46:36 crc kubenswrapper[4865]: I0103 04:46:36.158200 4865 scope.go:117] "RemoveContainer" containerID="31f9beaaf2b8d6137497567a7ce3c87ef472cd1b45aaee2c197282db87e34db3" Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.628968 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.728235 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wttg\" (UniqueName: \"kubernetes.io/projected/1811dd2a-9abd-466c-8c53-992d887c9321-kube-api-access-4wttg\") pod \"1811dd2a-9abd-466c-8c53-992d887c9321\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.728456 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-inventory\") pod \"1811dd2a-9abd-466c-8c53-992d887c9321\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.728539 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-ssh-key\") pod \"1811dd2a-9abd-466c-8c53-992d887c9321\" (UID: \"1811dd2a-9abd-466c-8c53-992d887c9321\") " Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.734168 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1811dd2a-9abd-466c-8c53-992d887c9321-kube-api-access-4wttg" (OuterVolumeSpecName: "kube-api-access-4wttg") pod "1811dd2a-9abd-466c-8c53-992d887c9321" (UID: "1811dd2a-9abd-466c-8c53-992d887c9321"). InnerVolumeSpecName "kube-api-access-4wttg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.754207 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1811dd2a-9abd-466c-8c53-992d887c9321" (UID: "1811dd2a-9abd-466c-8c53-992d887c9321"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.776631 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-inventory" (OuterVolumeSpecName: "inventory") pod "1811dd2a-9abd-466c-8c53-992d887c9321" (UID: "1811dd2a-9abd-466c-8c53-992d887c9321"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.833225 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wttg\" (UniqueName: \"kubernetes.io/projected/1811dd2a-9abd-466c-8c53-992d887c9321-kube-api-access-4wttg\") on node \"crc\" DevicePath \"\"" Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.833274 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:46:37 crc kubenswrapper[4865]: I0103 04:46:37.833287 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1811dd2a-9abd-466c-8c53-992d887c9321-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.127043 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" event={"ID":"1811dd2a-9abd-466c-8c53-992d887c9321","Type":"ContainerDied","Data":"77a8621741e30b5719e867cb3e2882f31a1400eeb0b717eabe37f905b627e2c7"} Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.127084 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a8621741e30b5719e867cb3e2882f31a1400eeb0b717eabe37f905b627e2c7" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.127113 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.222762 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w"] Jan 03 04:46:38 crc kubenswrapper[4865]: E0103 04:46:38.223126 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1811dd2a-9abd-466c-8c53-992d887c9321" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.223139 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1811dd2a-9abd-466c-8c53-992d887c9321" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.223319 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1811dd2a-9abd-466c-8c53-992d887c9321" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.223893 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.228247 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.228281 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.228766 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.228857 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.234835 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w"] Jan 03 04:46:38 crc kubenswrapper[4865]: E0103 04:46:38.342682 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1811dd2a_9abd_466c_8c53_992d887c9321.slice/crio-77a8621741e30b5719e867cb3e2882f31a1400eeb0b717eabe37f905b627e2c7\": RecentStats: unable to find data in memory cache]" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.343915 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hjdk\" (UniqueName: \"kubernetes.io/projected/81ff7929-70dd-400a-ae25-8e7425e5a9ae-kube-api-access-7hjdk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.343996 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.344081 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.445997 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hjdk\" (UniqueName: \"kubernetes.io/projected/81ff7929-70dd-400a-ae25-8e7425e5a9ae-kube-api-access-7hjdk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.446058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.446133 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.451721 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.451729 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.464169 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hjdk\" (UniqueName: \"kubernetes.io/projected/81ff7929-70dd-400a-ae25-8e7425e5a9ae-kube-api-access-7hjdk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:38 crc kubenswrapper[4865]: I0103 04:46:38.551590 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:39 crc kubenswrapper[4865]: I0103 04:46:39.095786 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w"] Jan 03 04:46:39 crc kubenswrapper[4865]: I0103 04:46:39.136356 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" event={"ID":"81ff7929-70dd-400a-ae25-8e7425e5a9ae","Type":"ContainerStarted","Data":"a73269334877edf6e101706e671b00cd85bc3cbadfddbe3c7ca2f18cc3c690f7"} Jan 03 04:46:40 crc kubenswrapper[4865]: I0103 04:46:40.146084 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" event={"ID":"81ff7929-70dd-400a-ae25-8e7425e5a9ae","Type":"ContainerStarted","Data":"6221fba3a4b3ecd6e5bb99432ff76cabb51ab70d3d7585f72ef7cf42bb408ab7"} Jan 03 04:46:40 crc kubenswrapper[4865]: I0103 04:46:40.169195 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" podStartSLOduration=1.539528508 podStartE2EDuration="2.169172916s" podCreationTimestamp="2026-01-03 04:46:38 +0000 UTC" firstStartedPulling="2026-01-03 04:46:39.100626127 +0000 UTC m=+1826.217679332" lastFinishedPulling="2026-01-03 04:46:39.730270545 +0000 UTC m=+1826.847323740" observedRunningTime="2026-01-03 04:46:40.161741784 +0000 UTC m=+1827.278794969" watchObservedRunningTime="2026-01-03 04:46:40.169172916 +0000 UTC m=+1827.286226101" Jan 03 04:46:44 crc kubenswrapper[4865]: I0103 04:46:44.155966 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:46:44 crc kubenswrapper[4865]: E0103 04:46:44.157849 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:46:45 crc kubenswrapper[4865]: I0103 04:46:45.199652 4865 generic.go:334] "Generic (PLEG): container finished" podID="81ff7929-70dd-400a-ae25-8e7425e5a9ae" containerID="6221fba3a4b3ecd6e5bb99432ff76cabb51ab70d3d7585f72ef7cf42bb408ab7" exitCode=0 Jan 03 04:46:45 crc kubenswrapper[4865]: I0103 04:46:45.199743 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" event={"ID":"81ff7929-70dd-400a-ae25-8e7425e5a9ae","Type":"ContainerDied","Data":"6221fba3a4b3ecd6e5bb99432ff76cabb51ab70d3d7585f72ef7cf42bb408ab7"} Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.709865 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.711155 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hjdk\" (UniqueName: \"kubernetes.io/projected/81ff7929-70dd-400a-ae25-8e7425e5a9ae-kube-api-access-7hjdk\") pod \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.711225 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-ssh-key\") pod \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.711363 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-inventory\") pod \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\" (UID: \"81ff7929-70dd-400a-ae25-8e7425e5a9ae\") " Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.719481 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ff7929-70dd-400a-ae25-8e7425e5a9ae-kube-api-access-7hjdk" (OuterVolumeSpecName: "kube-api-access-7hjdk") pod "81ff7929-70dd-400a-ae25-8e7425e5a9ae" (UID: "81ff7929-70dd-400a-ae25-8e7425e5a9ae"). InnerVolumeSpecName "kube-api-access-7hjdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.755609 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81ff7929-70dd-400a-ae25-8e7425e5a9ae" (UID: "81ff7929-70dd-400a-ae25-8e7425e5a9ae"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.757542 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-inventory" (OuterVolumeSpecName: "inventory") pod "81ff7929-70dd-400a-ae25-8e7425e5a9ae" (UID: "81ff7929-70dd-400a-ae25-8e7425e5a9ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.814015 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hjdk\" (UniqueName: \"kubernetes.io/projected/81ff7929-70dd-400a-ae25-8e7425e5a9ae-kube-api-access-7hjdk\") on node \"crc\" DevicePath \"\"" Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.814052 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:46:46 crc kubenswrapper[4865]: I0103 04:46:46.814065 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81ff7929-70dd-400a-ae25-8e7425e5a9ae-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.223141 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" event={"ID":"81ff7929-70dd-400a-ae25-8e7425e5a9ae","Type":"ContainerDied","Data":"a73269334877edf6e101706e671b00cd85bc3cbadfddbe3c7ca2f18cc3c690f7"} Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.223175 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.223178 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a73269334877edf6e101706e671b00cd85bc3cbadfddbe3c7ca2f18cc3c690f7" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.311749 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc"] Jan 03 04:46:47 crc kubenswrapper[4865]: E0103 04:46:47.312225 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ff7929-70dd-400a-ae25-8e7425e5a9ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.312247 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ff7929-70dd-400a-ae25-8e7425e5a9ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.312444 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ff7929-70dd-400a-ae25-8e7425e5a9ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.313110 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.318345 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.318630 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.319355 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.319501 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.323764 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc"] Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.424505 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vllpv\" (UniqueName: \"kubernetes.io/projected/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-kube-api-access-vllpv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.424693 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.424851 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.527206 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vllpv\" (UniqueName: \"kubernetes.io/projected/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-kube-api-access-vllpv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.527361 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.527501 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.533460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: 
\"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.549314 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.549902 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vllpv\" (UniqueName: \"kubernetes.io/projected/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-kube-api-access-vllpv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6wxhc\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:47 crc kubenswrapper[4865]: I0103 04:46:47.643804 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:46:48 crc kubenswrapper[4865]: I0103 04:46:48.001873 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc"] Jan 03 04:46:48 crc kubenswrapper[4865]: I0103 04:46:48.235545 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" event={"ID":"bee03f5c-0eca-42a3-9d5d-ea38f06a775b","Type":"ContainerStarted","Data":"1e4cc637f1d1c5b88a2764f7916ed859dd294f36569d44fefaa142e62dead4f3"} Jan 03 04:46:49 crc kubenswrapper[4865]: I0103 04:46:49.245643 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" event={"ID":"bee03f5c-0eca-42a3-9d5d-ea38f06a775b","Type":"ContainerStarted","Data":"dcb72f32f8f2a6c239fe48e4558b5076d82aec7e7b54a589d6951b60c76eb262"} Jan 03 04:46:55 crc kubenswrapper[4865]: I0103 04:46:55.156102 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:46:55 crc kubenswrapper[4865]: E0103 04:46:55.157647 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:47:02 crc kubenswrapper[4865]: I0103 04:47:02.064487 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" podStartSLOduration=14.588587827 podStartE2EDuration="15.064450094s" podCreationTimestamp="2026-01-03 04:46:47 +0000 UTC" firstStartedPulling="2026-01-03 
04:46:48.002936974 +0000 UTC m=+1835.119990159" lastFinishedPulling="2026-01-03 04:46:48.478799241 +0000 UTC m=+1835.595852426" observedRunningTime="2026-01-03 04:46:49.268974364 +0000 UTC m=+1836.386027569" watchObservedRunningTime="2026-01-03 04:47:02.064450094 +0000 UTC m=+1849.181503329" Jan 03 04:47:02 crc kubenswrapper[4865]: I0103 04:47:02.073665 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zwbkb"] Jan 03 04:47:02 crc kubenswrapper[4865]: I0103 04:47:02.085849 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zwbkb"] Jan 03 04:47:03 crc kubenswrapper[4865]: I0103 04:47:03.170684 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2155186b-b606-42e4-b728-f62c6c8b156a" path="/var/lib/kubelet/pods/2155186b-b606-42e4-b728-f62c6c8b156a/volumes" Jan 03 04:47:10 crc kubenswrapper[4865]: I0103 04:47:10.156998 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:47:10 crc kubenswrapper[4865]: E0103 04:47:10.157958 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:47:24 crc kubenswrapper[4865]: I0103 04:47:24.051975 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-w5zt8"] Jan 03 04:47:24 crc kubenswrapper[4865]: I0103 04:47:24.062174 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-w5zt8"] Jan 03 04:47:25 crc kubenswrapper[4865]: I0103 04:47:25.155878 4865 scope.go:117] "RemoveContainer" 
containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:47:25 crc kubenswrapper[4865]: E0103 04:47:25.156301 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:47:25 crc kubenswrapper[4865]: I0103 04:47:25.166936 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32fc6ea4-75a9-461a-8828-226e95f04c2e" path="/var/lib/kubelet/pods/32fc6ea4-75a9-461a-8828-226e95f04c2e/volumes" Jan 03 04:47:29 crc kubenswrapper[4865]: I0103 04:47:29.820915 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l9sjq"] Jan 03 04:47:29 crc kubenswrapper[4865]: I0103 04:47:29.824008 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:29 crc kubenswrapper[4865]: I0103 04:47:29.840834 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9sjq"] Jan 03 04:47:29 crc kubenswrapper[4865]: I0103 04:47:29.924647 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-utilities\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:29 crc kubenswrapper[4865]: I0103 04:47:29.924706 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5427\" (UniqueName: \"kubernetes.io/projected/9abefe02-7115-4355-a673-368d6b84ef80-kube-api-access-l5427\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:29 crc kubenswrapper[4865]: I0103 04:47:29.924734 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-catalog-content\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.027211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-utilities\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.027269 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l5427\" (UniqueName: \"kubernetes.io/projected/9abefe02-7115-4355-a673-368d6b84ef80-kube-api-access-l5427\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.027300 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-catalog-content\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.027862 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-catalog-content\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.028065 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-utilities\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.047494 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5427\" (UniqueName: \"kubernetes.io/projected/9abefe02-7115-4355-a673-368d6b84ef80-kube-api-access-l5427\") pod \"redhat-operators-l9sjq\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.148198 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.657746 4865 generic.go:334] "Generic (PLEG): container finished" podID="bee03f5c-0eca-42a3-9d5d-ea38f06a775b" containerID="dcb72f32f8f2a6c239fe48e4558b5076d82aec7e7b54a589d6951b60c76eb262" exitCode=0 Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.657895 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" event={"ID":"bee03f5c-0eca-42a3-9d5d-ea38f06a775b","Type":"ContainerDied","Data":"dcb72f32f8f2a6c239fe48e4558b5076d82aec7e7b54a589d6951b60c76eb262"} Jan 03 04:47:30 crc kubenswrapper[4865]: I0103 04:47:30.658058 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9sjq"] Jan 03 04:47:31 crc kubenswrapper[4865]: I0103 04:47:31.029308 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xjsrz"] Jan 03 04:47:31 crc kubenswrapper[4865]: I0103 04:47:31.038301 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xjsrz"] Jan 03 04:47:31 crc kubenswrapper[4865]: I0103 04:47:31.166859 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a944f13-33eb-4c3c-906f-a091e2bc9655" path="/var/lib/kubelet/pods/1a944f13-33eb-4c3c-906f-a091e2bc9655/volumes" Jan 03 04:47:31 crc kubenswrapper[4865]: I0103 04:47:31.666744 4865 generic.go:334] "Generic (PLEG): container finished" podID="9abefe02-7115-4355-a673-368d6b84ef80" containerID="4b9f61a854404982b3797b2959bc60e5a5400b663db8d28f3c21b7341496b123" exitCode=0 Jan 03 04:47:31 crc kubenswrapper[4865]: I0103 04:47:31.667103 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9sjq" 
event={"ID":"9abefe02-7115-4355-a673-368d6b84ef80","Type":"ContainerDied","Data":"4b9f61a854404982b3797b2959bc60e5a5400b663db8d28f3c21b7341496b123"} Jan 03 04:47:31 crc kubenswrapper[4865]: I0103 04:47:31.667169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9sjq" event={"ID":"9abefe02-7115-4355-a673-368d6b84ef80","Type":"ContainerStarted","Data":"fd792bcec479efdb3c908f2e37afb98a8004700cd18f46fd66f8129461195a15"} Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.116967 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.165326 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vllpv\" (UniqueName: \"kubernetes.io/projected/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-kube-api-access-vllpv\") pod \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.165409 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-ssh-key\") pod \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.165743 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-inventory\") pod \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\" (UID: \"bee03f5c-0eca-42a3-9d5d-ea38f06a775b\") " Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.183638 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-kube-api-access-vllpv" (OuterVolumeSpecName: 
"kube-api-access-vllpv") pod "bee03f5c-0eca-42a3-9d5d-ea38f06a775b" (UID: "bee03f5c-0eca-42a3-9d5d-ea38f06a775b"). InnerVolumeSpecName "kube-api-access-vllpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.202193 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-inventory" (OuterVolumeSpecName: "inventory") pod "bee03f5c-0eca-42a3-9d5d-ea38f06a775b" (UID: "bee03f5c-0eca-42a3-9d5d-ea38f06a775b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.202809 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bee03f5c-0eca-42a3-9d5d-ea38f06a775b" (UID: "bee03f5c-0eca-42a3-9d5d-ea38f06a775b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.268860 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vllpv\" (UniqueName: \"kubernetes.io/projected/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-kube-api-access-vllpv\") on node \"crc\" DevicePath \"\"" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.268890 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.268899 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bee03f5c-0eca-42a3-9d5d-ea38f06a775b-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.676346 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.676332 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6wxhc" event={"ID":"bee03f5c-0eca-42a3-9d5d-ea38f06a775b","Type":"ContainerDied","Data":"1e4cc637f1d1c5b88a2764f7916ed859dd294f36569d44fefaa142e62dead4f3"} Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.676422 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4cc637f1d1c5b88a2764f7916ed859dd294f36569d44fefaa142e62dead4f3" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.682521 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9sjq" event={"ID":"9abefe02-7115-4355-a673-368d6b84ef80","Type":"ContainerStarted","Data":"4740b7a1dab90c6b87169db34151be9bc488597138bdc26a85be389882c89a03"} Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.776618 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4"] Jan 03 04:47:32 crc kubenswrapper[4865]: E0103 04:47:32.777668 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee03f5c-0eca-42a3-9d5d-ea38f06a775b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.777697 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee03f5c-0eca-42a3-9d5d-ea38f06a775b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.777939 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee03f5c-0eca-42a3-9d5d-ea38f06a775b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.778916 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.784661 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.784705 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.784926 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.784989 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.786948 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4"] Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.881730 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.881786 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkf5h\" (UniqueName: \"kubernetes.io/projected/65c55653-7592-4a07-bfc2-c6273437c99c-kube-api-access-rkf5h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.881973 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.983212 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.983532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkf5h\" (UniqueName: \"kubernetes.io/projected/65c55653-7592-4a07-bfc2-c6273437c99c-kube-api-access-rkf5h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.983776 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.991449 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: 
\"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:32 crc kubenswrapper[4865]: I0103 04:47:32.991535 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:33 crc kubenswrapper[4865]: I0103 04:47:33.007029 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkf5h\" (UniqueName: \"kubernetes.io/projected/65c55653-7592-4a07-bfc2-c6273437c99c-kube-api-access-rkf5h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:33 crc kubenswrapper[4865]: I0103 04:47:33.095051 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:47:33 crc kubenswrapper[4865]: I0103 04:47:33.676208 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4"] Jan 03 04:47:33 crc kubenswrapper[4865]: W0103 04:47:33.681036 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65c55653_7592_4a07_bfc2_c6273437c99c.slice/crio-12bf83e168b484f5fb8e231c1470fe06b9aba0ddebf7e177e889c900f8a7351f WatchSource:0}: Error finding container 12bf83e168b484f5fb8e231c1470fe06b9aba0ddebf7e177e889c900f8a7351f: Status 404 returned error can't find the container with id 12bf83e168b484f5fb8e231c1470fe06b9aba0ddebf7e177e889c900f8a7351f Jan 03 04:47:33 crc kubenswrapper[4865]: I0103 04:47:33.698606 4865 generic.go:334] "Generic (PLEG): container finished" podID="9abefe02-7115-4355-a673-368d6b84ef80" containerID="4740b7a1dab90c6b87169db34151be9bc488597138bdc26a85be389882c89a03" exitCode=0 Jan 03 04:47:33 crc kubenswrapper[4865]: I0103 04:47:33.698651 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9sjq" event={"ID":"9abefe02-7115-4355-a673-368d6b84ef80","Type":"ContainerDied","Data":"4740b7a1dab90c6b87169db34151be9bc488597138bdc26a85be389882c89a03"} Jan 03 04:47:34 crc kubenswrapper[4865]: I0103 04:47:34.710168 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" event={"ID":"65c55653-7592-4a07-bfc2-c6273437c99c","Type":"ContainerStarted","Data":"df63b05b9ed8a24d4b29fa0d97b83193affa08bc21221dcb44e092775214af32"} Jan 03 04:47:34 crc kubenswrapper[4865]: I0103 04:47:34.710553 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" 
event={"ID":"65c55653-7592-4a07-bfc2-c6273437c99c","Type":"ContainerStarted","Data":"12bf83e168b484f5fb8e231c1470fe06b9aba0ddebf7e177e889c900f8a7351f"} Jan 03 04:47:34 crc kubenswrapper[4865]: I0103 04:47:34.714973 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9sjq" event={"ID":"9abefe02-7115-4355-a673-368d6b84ef80","Type":"ContainerStarted","Data":"b25a23b8d872d3d63de1b4f32e60d8dd43705d34d6b50f666260ef5cd3e9e95f"} Jan 03 04:47:34 crc kubenswrapper[4865]: I0103 04:47:34.726914 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" podStartSLOduration=2.328818735 podStartE2EDuration="2.72689807s" podCreationTimestamp="2026-01-03 04:47:32 +0000 UTC" firstStartedPulling="2026-01-03 04:47:33.697373926 +0000 UTC m=+1880.814427111" lastFinishedPulling="2026-01-03 04:47:34.095453261 +0000 UTC m=+1881.212506446" observedRunningTime="2026-01-03 04:47:34.725474841 +0000 UTC m=+1881.842528046" watchObservedRunningTime="2026-01-03 04:47:34.72689807 +0000 UTC m=+1881.843951265" Jan 03 04:47:34 crc kubenswrapper[4865]: I0103 04:47:34.741716 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l9sjq" podStartSLOduration=3.192382634 podStartE2EDuration="5.741693294s" podCreationTimestamp="2026-01-03 04:47:29 +0000 UTC" firstStartedPulling="2026-01-03 04:47:31.669594043 +0000 UTC m=+1878.786647228" lastFinishedPulling="2026-01-03 04:47:34.218904703 +0000 UTC m=+1881.335957888" observedRunningTime="2026-01-03 04:47:34.739267797 +0000 UTC m=+1881.856320992" watchObservedRunningTime="2026-01-03 04:47:34.741693294 +0000 UTC m=+1881.858746479" Jan 03 04:47:36 crc kubenswrapper[4865]: I0103 04:47:36.340146 4865 scope.go:117] "RemoveContainer" containerID="9f1bcf24558cd967053992df49d42fed51acb41bc9717547cb9662f1d8f29d1c" Jan 03 04:47:36 crc kubenswrapper[4865]: I0103 04:47:36.379023 
4865 scope.go:117] "RemoveContainer" containerID="c7bad2c4ee62f3e3f80b2a3f615bd31e8d2d026d1bfe272216668d28ca4bf3f0" Jan 03 04:47:36 crc kubenswrapper[4865]: I0103 04:47:36.441467 4865 scope.go:117] "RemoveContainer" containerID="e648e330c026c6a1a165e4bbdaefa8b1ead787450b33718056c54a0b7ff6724c" Jan 03 04:47:37 crc kubenswrapper[4865]: I0103 04:47:37.155424 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:47:37 crc kubenswrapper[4865]: E0103 04:47:37.155827 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:47:40 crc kubenswrapper[4865]: I0103 04:47:40.148419 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:40 crc kubenswrapper[4865]: I0103 04:47:40.148796 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:40 crc kubenswrapper[4865]: I0103 04:47:40.195300 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:40 crc kubenswrapper[4865]: I0103 04:47:40.825436 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:40 crc kubenswrapper[4865]: I0103 04:47:40.887331 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9sjq"] Jan 03 04:47:42 crc kubenswrapper[4865]: I0103 04:47:42.781081 4865 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-l9sjq" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="registry-server" containerID="cri-o://b25a23b8d872d3d63de1b4f32e60d8dd43705d34d6b50f666260ef5cd3e9e95f" gracePeriod=2 Jan 03 04:47:43 crc kubenswrapper[4865]: I0103 04:47:43.799376 4865 generic.go:334] "Generic (PLEG): container finished" podID="9abefe02-7115-4355-a673-368d6b84ef80" containerID="b25a23b8d872d3d63de1b4f32e60d8dd43705d34d6b50f666260ef5cd3e9e95f" exitCode=0 Jan 03 04:47:43 crc kubenswrapper[4865]: I0103 04:47:43.799434 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9sjq" event={"ID":"9abefe02-7115-4355-a673-368d6b84ef80","Type":"ContainerDied","Data":"b25a23b8d872d3d63de1b4f32e60d8dd43705d34d6b50f666260ef5cd3e9e95f"} Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.069904 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.211917 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5427\" (UniqueName: \"kubernetes.io/projected/9abefe02-7115-4355-a673-368d6b84ef80-kube-api-access-l5427\") pod \"9abefe02-7115-4355-a673-368d6b84ef80\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.212104 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-catalog-content\") pod \"9abefe02-7115-4355-a673-368d6b84ef80\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.212185 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-utilities\") pod \"9abefe02-7115-4355-a673-368d6b84ef80\" (UID: \"9abefe02-7115-4355-a673-368d6b84ef80\") " Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.213613 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-utilities" (OuterVolumeSpecName: "utilities") pod "9abefe02-7115-4355-a673-368d6b84ef80" (UID: "9abefe02-7115-4355-a673-368d6b84ef80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.219754 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abefe02-7115-4355-a673-368d6b84ef80-kube-api-access-l5427" (OuterVolumeSpecName: "kube-api-access-l5427") pod "9abefe02-7115-4355-a673-368d6b84ef80" (UID: "9abefe02-7115-4355-a673-368d6b84ef80"). InnerVolumeSpecName "kube-api-access-l5427". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.314855 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.314912 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5427\" (UniqueName: \"kubernetes.io/projected/9abefe02-7115-4355-a673-368d6b84ef80-kube-api-access-l5427\") on node \"crc\" DevicePath \"\"" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.343787 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9abefe02-7115-4355-a673-368d6b84ef80" (UID: "9abefe02-7115-4355-a673-368d6b84ef80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.417267 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9abefe02-7115-4355-a673-368d6b84ef80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.812618 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9sjq" event={"ID":"9abefe02-7115-4355-a673-368d6b84ef80","Type":"ContainerDied","Data":"fd792bcec479efdb3c908f2e37afb98a8004700cd18f46fd66f8129461195a15"} Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.812689 4865 scope.go:117] "RemoveContainer" containerID="b25a23b8d872d3d63de1b4f32e60d8dd43705d34d6b50f666260ef5cd3e9e95f" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.812783 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9sjq" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.845759 4865 scope.go:117] "RemoveContainer" containerID="4740b7a1dab90c6b87169db34151be9bc488597138bdc26a85be389882c89a03" Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.855200 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9sjq"] Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.863037 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l9sjq"] Jan 03 04:47:44 crc kubenswrapper[4865]: I0103 04:47:44.870853 4865 scope.go:117] "RemoveContainer" containerID="4b9f61a854404982b3797b2959bc60e5a5400b663db8d28f3c21b7341496b123" Jan 03 04:47:45 crc kubenswrapper[4865]: I0103 04:47:45.164854 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abefe02-7115-4355-a673-368d6b84ef80" path="/var/lib/kubelet/pods/9abefe02-7115-4355-a673-368d6b84ef80/volumes" Jan 03 04:47:52 crc 
kubenswrapper[4865]: I0103 04:47:52.176249 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:47:52 crc kubenswrapper[4865]: E0103 04:47:52.177833 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:48:05 crc kubenswrapper[4865]: I0103 04:48:05.159037 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:48:05 crc kubenswrapper[4865]: E0103 04:48:05.159601 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:48:09 crc kubenswrapper[4865]: I0103 04:48:09.051766 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjjnb"] Jan 03 04:48:09 crc kubenswrapper[4865]: I0103 04:48:09.058035 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjjnb"] Jan 03 04:48:09 crc kubenswrapper[4865]: I0103 04:48:09.165218 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7635026-837e-4427-943e-d5de8b29c273" path="/var/lib/kubelet/pods/d7635026-837e-4427-943e-d5de8b29c273/volumes" Jan 03 04:48:19 crc kubenswrapper[4865]: I0103 04:48:19.155834 4865 scope.go:117] 
"RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:48:19 crc kubenswrapper[4865]: E0103 04:48:19.156588 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:48:31 crc kubenswrapper[4865]: I0103 04:48:31.313457 4865 generic.go:334] "Generic (PLEG): container finished" podID="65c55653-7592-4a07-bfc2-c6273437c99c" containerID="df63b05b9ed8a24d4b29fa0d97b83193affa08bc21221dcb44e092775214af32" exitCode=0 Jan 03 04:48:31 crc kubenswrapper[4865]: I0103 04:48:31.313701 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" event={"ID":"65c55653-7592-4a07-bfc2-c6273437c99c","Type":"ContainerDied","Data":"df63b05b9ed8a24d4b29fa0d97b83193affa08bc21221dcb44e092775214af32"} Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.795246 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.881514 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-ssh-key\") pod \"65c55653-7592-4a07-bfc2-c6273437c99c\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.881714 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkf5h\" (UniqueName: \"kubernetes.io/projected/65c55653-7592-4a07-bfc2-c6273437c99c-kube-api-access-rkf5h\") pod \"65c55653-7592-4a07-bfc2-c6273437c99c\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.881862 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-inventory\") pod \"65c55653-7592-4a07-bfc2-c6273437c99c\" (UID: \"65c55653-7592-4a07-bfc2-c6273437c99c\") " Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.891540 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c55653-7592-4a07-bfc2-c6273437c99c-kube-api-access-rkf5h" (OuterVolumeSpecName: "kube-api-access-rkf5h") pod "65c55653-7592-4a07-bfc2-c6273437c99c" (UID: "65c55653-7592-4a07-bfc2-c6273437c99c"). InnerVolumeSpecName "kube-api-access-rkf5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.915500 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "65c55653-7592-4a07-bfc2-c6273437c99c" (UID: "65c55653-7592-4a07-bfc2-c6273437c99c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.916500 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-inventory" (OuterVolumeSpecName: "inventory") pod "65c55653-7592-4a07-bfc2-c6273437c99c" (UID: "65c55653-7592-4a07-bfc2-c6273437c99c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.984279 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.984313 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkf5h\" (UniqueName: \"kubernetes.io/projected/65c55653-7592-4a07-bfc2-c6273437c99c-kube-api-access-rkf5h\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:32 crc kubenswrapper[4865]: I0103 04:48:32.984326 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65c55653-7592-4a07-bfc2-c6273437c99c-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.342458 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" event={"ID":"65c55653-7592-4a07-bfc2-c6273437c99c","Type":"ContainerDied","Data":"12bf83e168b484f5fb8e231c1470fe06b9aba0ddebf7e177e889c900f8a7351f"} Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.342901 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12bf83e168b484f5fb8e231c1470fe06b9aba0ddebf7e177e889c900f8a7351f" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.342524 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.436005 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lsx5q"] Jan 03 04:48:33 crc kubenswrapper[4865]: E0103 04:48:33.436650 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="extract-utilities" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.436737 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="extract-utilities" Jan 03 04:48:33 crc kubenswrapper[4865]: E0103 04:48:33.436822 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c55653-7592-4a07-bfc2-c6273437c99c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.436938 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c55653-7592-4a07-bfc2-c6273437c99c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:48:33 crc kubenswrapper[4865]: E0103 04:48:33.437020 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="extract-content" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.437092 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="extract-content" Jan 03 04:48:33 crc kubenswrapper[4865]: E0103 04:48:33.437160 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="registry-server" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.437237 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="registry-server" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.437533 4865 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9abefe02-7115-4355-a673-368d6b84ef80" containerName="registry-server" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.437669 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c55653-7592-4a07-bfc2-c6273437c99c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.438430 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.441216 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.441293 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.441443 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.448924 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.494015 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.494227 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-ssh-key-openstack-edpm-ipam\") 
pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.494583 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnzf\" (UniqueName: \"kubernetes.io/projected/4ae9c175-8601-467f-8f66-220277a0ffe1-kube-api-access-mcnzf\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.499640 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lsx5q"] Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.596029 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcnzf\" (UniqueName: \"kubernetes.io/projected/4ae9c175-8601-467f-8f66-220277a0ffe1-kube-api-access-mcnzf\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.596093 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.596135 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.602022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.618034 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.628437 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcnzf\" (UniqueName: \"kubernetes.io/projected/4ae9c175-8601-467f-8f66-220277a0ffe1-kube-api-access-mcnzf\") pod \"ssh-known-hosts-edpm-deployment-lsx5q\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:33 crc kubenswrapper[4865]: I0103 04:48:33.822074 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:34 crc kubenswrapper[4865]: I0103 04:48:34.155717 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:48:34 crc kubenswrapper[4865]: E0103 04:48:34.156233 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:48:34 crc kubenswrapper[4865]: I0103 04:48:34.377411 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lsx5q"] Jan 03 04:48:34 crc kubenswrapper[4865]: I0103 04:48:34.389766 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 04:48:35 crc kubenswrapper[4865]: I0103 04:48:35.360216 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" event={"ID":"4ae9c175-8601-467f-8f66-220277a0ffe1","Type":"ContainerStarted","Data":"0dacc0a10b0093b66f50650feacd31c98f1ab63de2cb1462f0734b8ccd87ab3d"} Jan 03 04:48:35 crc kubenswrapper[4865]: I0103 04:48:35.360884 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" event={"ID":"4ae9c175-8601-467f-8f66-220277a0ffe1","Type":"ContainerStarted","Data":"3bc6a4a496d1908de9aaa3269d25d1c90c5ace4349ac884f20de63265aaa3ad5"} Jan 03 04:48:35 crc kubenswrapper[4865]: I0103 04:48:35.386658 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" podStartSLOduration=1.9244429699999999 podStartE2EDuration="2.386634595s" 
podCreationTimestamp="2026-01-03 04:48:33 +0000 UTC" firstStartedPulling="2026-01-03 04:48:34.389468535 +0000 UTC m=+1941.506521730" lastFinishedPulling="2026-01-03 04:48:34.85166016 +0000 UTC m=+1941.968713355" observedRunningTime="2026-01-03 04:48:35.378349049 +0000 UTC m=+1942.495402254" watchObservedRunningTime="2026-01-03 04:48:35.386634595 +0000 UTC m=+1942.503687780" Jan 03 04:48:36 crc kubenswrapper[4865]: I0103 04:48:36.544509 4865 scope.go:117] "RemoveContainer" containerID="76b1e0861f69b912ea16e944a2124ef9f50fbdee27b35b1225e8182d9ae1c70b" Jan 03 04:48:42 crc kubenswrapper[4865]: I0103 04:48:42.425610 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ae9c175-8601-467f-8f66-220277a0ffe1" containerID="0dacc0a10b0093b66f50650feacd31c98f1ab63de2cb1462f0734b8ccd87ab3d" exitCode=0 Jan 03 04:48:42 crc kubenswrapper[4865]: I0103 04:48:42.425685 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" event={"ID":"4ae9c175-8601-467f-8f66-220277a0ffe1","Type":"ContainerDied","Data":"0dacc0a10b0093b66f50650feacd31c98f1ab63de2cb1462f0734b8ccd87ab3d"} Jan 03 04:48:43 crc kubenswrapper[4865]: I0103 04:48:43.914181 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.015556 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-inventory-0\") pod \"4ae9c175-8601-467f-8f66-220277a0ffe1\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.015655 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcnzf\" (UniqueName: \"kubernetes.io/projected/4ae9c175-8601-467f-8f66-220277a0ffe1-kube-api-access-mcnzf\") pod \"4ae9c175-8601-467f-8f66-220277a0ffe1\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.015813 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-ssh-key-openstack-edpm-ipam\") pod \"4ae9c175-8601-467f-8f66-220277a0ffe1\" (UID: \"4ae9c175-8601-467f-8f66-220277a0ffe1\") " Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.022630 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae9c175-8601-467f-8f66-220277a0ffe1-kube-api-access-mcnzf" (OuterVolumeSpecName: "kube-api-access-mcnzf") pod "4ae9c175-8601-467f-8f66-220277a0ffe1" (UID: "4ae9c175-8601-467f-8f66-220277a0ffe1"). InnerVolumeSpecName "kube-api-access-mcnzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.052880 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4ae9c175-8601-467f-8f66-220277a0ffe1" (UID: "4ae9c175-8601-467f-8f66-220277a0ffe1"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.064342 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ae9c175-8601-467f-8f66-220277a0ffe1" (UID: "4ae9c175-8601-467f-8f66-220277a0ffe1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.118041 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.118076 4865 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ae9c175-8601-467f-8f66-220277a0ffe1-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.118089 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcnzf\" (UniqueName: \"kubernetes.io/projected/4ae9c175-8601-467f-8f66-220277a0ffe1-kube-api-access-mcnzf\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.448761 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" event={"ID":"4ae9c175-8601-467f-8f66-220277a0ffe1","Type":"ContainerDied","Data":"3bc6a4a496d1908de9aaa3269d25d1c90c5ace4349ac884f20de63265aaa3ad5"} Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.448830 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc6a4a496d1908de9aaa3269d25d1c90c5ace4349ac884f20de63265aaa3ad5" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.448883 
4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lsx5q" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.545174 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp"] Jan 03 04:48:44 crc kubenswrapper[4865]: E0103 04:48:44.546062 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae9c175-8601-467f-8f66-220277a0ffe1" containerName="ssh-known-hosts-edpm-deployment" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.546093 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae9c175-8601-467f-8f66-220277a0ffe1" containerName="ssh-known-hosts-edpm-deployment" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.546592 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae9c175-8601-467f-8f66-220277a0ffe1" containerName="ssh-known-hosts-edpm-deployment" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.548207 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.551557 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.552100 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.552206 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.552440 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.565271 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp"] Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.628211 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.628520 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clf5\" (UniqueName: \"kubernetes.io/projected/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-kube-api-access-5clf5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.628712 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.731344 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.731787 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clf5\" (UniqueName: \"kubernetes.io/projected/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-kube-api-access-5clf5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.732071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.735848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.736016 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.752646 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clf5\" (UniqueName: \"kubernetes.io/projected/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-kube-api-access-5clf5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gm6fp\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:44 crc kubenswrapper[4865]: I0103 04:48:44.881839 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:45 crc kubenswrapper[4865]: I0103 04:48:45.489556 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp"] Jan 03 04:48:45 crc kubenswrapper[4865]: W0103 04:48:45.490782 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf635f6c7_e6b9_49f1_ba28_59fd66a1c425.slice/crio-d796536f89add146685315d4cafb996a7c33cc97ad46d601269d7492f7cdb917 WatchSource:0}: Error finding container d796536f89add146685315d4cafb996a7c33cc97ad46d601269d7492f7cdb917: Status 404 returned error can't find the container with id d796536f89add146685315d4cafb996a7c33cc97ad46d601269d7492f7cdb917 Jan 03 04:48:46 crc kubenswrapper[4865]: I0103 04:48:46.469575 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" event={"ID":"f635f6c7-e6b9-49f1-ba28-59fd66a1c425","Type":"ContainerStarted","Data":"d796536f89add146685315d4cafb996a7c33cc97ad46d601269d7492f7cdb917"} Jan 03 04:48:47 crc kubenswrapper[4865]: I0103 04:48:47.480697 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" event={"ID":"f635f6c7-e6b9-49f1-ba28-59fd66a1c425","Type":"ContainerStarted","Data":"1eb7edc31814bf3ae4abe137f99859281989334db2357591ea5ed4198658e718"} Jan 03 04:48:47 crc kubenswrapper[4865]: I0103 04:48:47.506583 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" podStartSLOduration=2.752723456 podStartE2EDuration="3.506558789s" podCreationTimestamp="2026-01-03 04:48:44 +0000 UTC" firstStartedPulling="2026-01-03 04:48:45.493816107 +0000 UTC m=+1952.610869302" lastFinishedPulling="2026-01-03 04:48:46.2476514 +0000 UTC m=+1953.364704635" observedRunningTime="2026-01-03 
04:48:47.499545237 +0000 UTC m=+1954.616598432" watchObservedRunningTime="2026-01-03 04:48:47.506558789 +0000 UTC m=+1954.623612004" Jan 03 04:48:48 crc kubenswrapper[4865]: I0103 04:48:48.156728 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:48:48 crc kubenswrapper[4865]: E0103 04:48:48.157879 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:48:55 crc kubenswrapper[4865]: I0103 04:48:55.557726 4865 generic.go:334] "Generic (PLEG): container finished" podID="f635f6c7-e6b9-49f1-ba28-59fd66a1c425" containerID="1eb7edc31814bf3ae4abe137f99859281989334db2357591ea5ed4198658e718" exitCode=0 Jan 03 04:48:55 crc kubenswrapper[4865]: I0103 04:48:55.557836 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" event={"ID":"f635f6c7-e6b9-49f1-ba28-59fd66a1c425","Type":"ContainerDied","Data":"1eb7edc31814bf3ae4abe137f99859281989334db2357591ea5ed4198658e718"} Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.000485 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.085081 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-inventory\") pod \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.085311 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clf5\" (UniqueName: \"kubernetes.io/projected/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-kube-api-access-5clf5\") pod \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.085454 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-ssh-key\") pod \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\" (UID: \"f635f6c7-e6b9-49f1-ba28-59fd66a1c425\") " Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.090742 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-kube-api-access-5clf5" (OuterVolumeSpecName: "kube-api-access-5clf5") pod "f635f6c7-e6b9-49f1-ba28-59fd66a1c425" (UID: "f635f6c7-e6b9-49f1-ba28-59fd66a1c425"). InnerVolumeSpecName "kube-api-access-5clf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.108641 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-inventory" (OuterVolumeSpecName: "inventory") pod "f635f6c7-e6b9-49f1-ba28-59fd66a1c425" (UID: "f635f6c7-e6b9-49f1-ba28-59fd66a1c425"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.123127 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f635f6c7-e6b9-49f1-ba28-59fd66a1c425" (UID: "f635f6c7-e6b9-49f1-ba28-59fd66a1c425"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.188039 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.188067 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5clf5\" (UniqueName: \"kubernetes.io/projected/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-kube-api-access-5clf5\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.188077 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f635f6c7-e6b9-49f1-ba28-59fd66a1c425-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.581116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" event={"ID":"f635f6c7-e6b9-49f1-ba28-59fd66a1c425","Type":"ContainerDied","Data":"d796536f89add146685315d4cafb996a7c33cc97ad46d601269d7492f7cdb917"} Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.581156 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d796536f89add146685315d4cafb996a7c33cc97ad46d601269d7492f7cdb917" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.581196 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gm6fp" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.664498 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p"] Jan 03 04:48:57 crc kubenswrapper[4865]: E0103 04:48:57.664946 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f635f6c7-e6b9-49f1-ba28-59fd66a1c425" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.664966 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f635f6c7-e6b9-49f1-ba28-59fd66a1c425" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.665203 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f635f6c7-e6b9-49f1-ba28-59fd66a1c425" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.667099 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.670060 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.670494 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.670759 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.671320 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.679781 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p"] Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.699652 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2qg\" (UniqueName: \"kubernetes.io/projected/f4adfc92-6b31-4832-9159-7ec2b85b018f-kube-api-access-xg2qg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.700089 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.700226 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.801987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2qg\" (UniqueName: \"kubernetes.io/projected/f4adfc92-6b31-4832-9159-7ec2b85b018f-kube-api-access-xg2qg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.802120 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.802166 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.805749 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.812939 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.819604 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2qg\" (UniqueName: \"kubernetes.io/projected/f4adfc92-6b31-4832-9159-7ec2b85b018f-kube-api-access-xg2qg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:57 crc kubenswrapper[4865]: I0103 04:48:57.985912 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:48:58 crc kubenswrapper[4865]: I0103 04:48:58.562296 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p"] Jan 03 04:48:58 crc kubenswrapper[4865]: I0103 04:48:58.595818 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" event={"ID":"f4adfc92-6b31-4832-9159-7ec2b85b018f","Type":"ContainerStarted","Data":"f562bd400abe6220f815e320befe579070b88e2f892e98c867e344a7562c92c1"} Jan 03 04:48:59 crc kubenswrapper[4865]: I0103 04:48:59.155468 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:48:59 crc kubenswrapper[4865]: E0103 04:48:59.155834 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:48:59 crc kubenswrapper[4865]: I0103 04:48:59.607957 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" event={"ID":"f4adfc92-6b31-4832-9159-7ec2b85b018f","Type":"ContainerStarted","Data":"98787b9f1e941834cc26a9e155607e888c3e68db0e3f67a2111c90f91845a625"} Jan 03 04:48:59 crc kubenswrapper[4865]: I0103 04:48:59.633492 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" podStartSLOduration=2.136403025 podStartE2EDuration="2.633469003s" podCreationTimestamp="2026-01-03 04:48:57 +0000 UTC" firstStartedPulling="2026-01-03 
04:48:58.570515987 +0000 UTC m=+1965.687569172" lastFinishedPulling="2026-01-03 04:48:59.067581955 +0000 UTC m=+1966.184635150" observedRunningTime="2026-01-03 04:48:59.624276652 +0000 UTC m=+1966.741329837" watchObservedRunningTime="2026-01-03 04:48:59.633469003 +0000 UTC m=+1966.750522188" Jan 03 04:49:08 crc kubenswrapper[4865]: I0103 04:49:08.682962 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4adfc92-6b31-4832-9159-7ec2b85b018f" containerID="98787b9f1e941834cc26a9e155607e888c3e68db0e3f67a2111c90f91845a625" exitCode=0 Jan 03 04:49:08 crc kubenswrapper[4865]: I0103 04:49:08.683054 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" event={"ID":"f4adfc92-6b31-4832-9159-7ec2b85b018f","Type":"ContainerDied","Data":"98787b9f1e941834cc26a9e155607e888c3e68db0e3f67a2111c90f91845a625"} Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.133094 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.155625 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:49:10 crc kubenswrapper[4865]: E0103 04:49:10.155997 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.242567 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-inventory\") pod \"f4adfc92-6b31-4832-9159-7ec2b85b018f\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.242622 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-ssh-key\") pod \"f4adfc92-6b31-4832-9159-7ec2b85b018f\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.242684 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg2qg\" (UniqueName: \"kubernetes.io/projected/f4adfc92-6b31-4832-9159-7ec2b85b018f-kube-api-access-xg2qg\") pod \"f4adfc92-6b31-4832-9159-7ec2b85b018f\" (UID: \"f4adfc92-6b31-4832-9159-7ec2b85b018f\") " Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.249260 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f4adfc92-6b31-4832-9159-7ec2b85b018f-kube-api-access-xg2qg" (OuterVolumeSpecName: "kube-api-access-xg2qg") pod "f4adfc92-6b31-4832-9159-7ec2b85b018f" (UID: "f4adfc92-6b31-4832-9159-7ec2b85b018f"). InnerVolumeSpecName "kube-api-access-xg2qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.269994 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f4adfc92-6b31-4832-9159-7ec2b85b018f" (UID: "f4adfc92-6b31-4832-9159-7ec2b85b018f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.273962 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-inventory" (OuterVolumeSpecName: "inventory") pod "f4adfc92-6b31-4832-9159-7ec2b85b018f" (UID: "f4adfc92-6b31-4832-9159-7ec2b85b018f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.345384 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.345432 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4adfc92-6b31-4832-9159-7ec2b85b018f-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.345446 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg2qg\" (UniqueName: \"kubernetes.io/projected/f4adfc92-6b31-4832-9159-7ec2b85b018f-kube-api-access-xg2qg\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.702605 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" event={"ID":"f4adfc92-6b31-4832-9159-7ec2b85b018f","Type":"ContainerDied","Data":"f562bd400abe6220f815e320befe579070b88e2f892e98c867e344a7562c92c1"} Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.702662 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f562bd400abe6220f815e320befe579070b88e2f892e98c867e344a7562c92c1" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.702726 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.815935 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x"] Jan 03 04:49:10 crc kubenswrapper[4865]: E0103 04:49:10.816350 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4adfc92-6b31-4832-9159-7ec2b85b018f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.816372 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4adfc92-6b31-4832-9159-7ec2b85b018f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.816617 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4adfc92-6b31-4832-9159-7ec2b85b018f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.817304 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.820944 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.821071 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.821087 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.821445 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.821538 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.821591 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.821707 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.821561 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.834523 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x"] Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.855698 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.855766 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.855811 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.855833 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.855858 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.855978 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856046 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856111 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856187 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swtzb\" (UniqueName: 
\"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-kube-api-access-swtzb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856244 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856296 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856361 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856442 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.856513 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.957908 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swtzb\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-kube-api-access-swtzb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.957987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958020 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958048 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958078 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958111 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958153 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958180 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958223 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958249 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958273 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958308 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958341 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.958407 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.963448 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.963503 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.964001 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.964459 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.964621 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.965457 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.965718 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.965794 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.965954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.967803 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.968473 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.969737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.970955 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:10 crc kubenswrapper[4865]: I0103 04:49:10.976961 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swtzb\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-kube-api-access-swtzb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x\" (UID: 
\"c777a6c5-214d-40e9-b948-0e8d7a872578\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:11 crc kubenswrapper[4865]: I0103 04:49:11.142897 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:11 crc kubenswrapper[4865]: I0103 04:49:11.442416 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x"] Jan 03 04:49:11 crc kubenswrapper[4865]: I0103 04:49:11.712701 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" event={"ID":"c777a6c5-214d-40e9-b948-0e8d7a872578","Type":"ContainerStarted","Data":"cc2a80e348a9c3169bb0f9bbdd59aad7d1684268a0261bad68116bd2876a0dca"} Jan 03 04:49:12 crc kubenswrapper[4865]: I0103 04:49:12.722582 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" event={"ID":"c777a6c5-214d-40e9-b948-0e8d7a872578","Type":"ContainerStarted","Data":"777efffcde89158edc6d6de3aa79738bee59f0ec767e6be5ea0960976a939939"} Jan 03 04:49:12 crc kubenswrapper[4865]: I0103 04:49:12.751020 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" podStartSLOduration=2.065183674 podStartE2EDuration="2.750999749s" podCreationTimestamp="2026-01-03 04:49:10 +0000 UTC" firstStartedPulling="2026-01-03 04:49:11.443670366 +0000 UTC m=+1978.560723551" lastFinishedPulling="2026-01-03 04:49:12.129486441 +0000 UTC m=+1979.246539626" observedRunningTime="2026-01-03 04:49:12.742523087 +0000 UTC m=+1979.859576282" watchObservedRunningTime="2026-01-03 04:49:12.750999749 +0000 UTC m=+1979.868052944" Jan 03 04:49:21 crc kubenswrapper[4865]: I0103 04:49:21.155978 4865 scope.go:117] "RemoveContainer" 
containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:49:21 crc kubenswrapper[4865]: I0103 04:49:21.796450 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"59295a62d7927d7c1994c3717d9a09dd59ab4eba318cf0d72d452c45ed7c1df0"} Jan 03 04:49:52 crc kubenswrapper[4865]: I0103 04:49:52.063630 4865 generic.go:334] "Generic (PLEG): container finished" podID="c777a6c5-214d-40e9-b948-0e8d7a872578" containerID="777efffcde89158edc6d6de3aa79738bee59f0ec767e6be5ea0960976a939939" exitCode=0 Jan 03 04:49:52 crc kubenswrapper[4865]: I0103 04:49:52.063730 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" event={"ID":"c777a6c5-214d-40e9-b948-0e8d7a872578","Type":"ContainerDied","Data":"777efffcde89158edc6d6de3aa79738bee59f0ec767e6be5ea0960976a939939"} Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.559562 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661454 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661503 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ssh-key\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661588 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-nova-combined-ca-bundle\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661631 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-ovn-default-certs-0\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661708 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661751 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-repo-setup-combined-ca-bundle\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661826 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-bootstrap-combined-ca-bundle\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661864 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-inventory\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.661943 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-neutron-metadata-combined-ca-bundle\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.662025 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-libvirt-combined-ca-bundle\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc 
kubenswrapper[4865]: I0103 04:49:53.662052 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swtzb\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-kube-api-access-swtzb\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.662127 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ovn-combined-ca-bundle\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.662198 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.662227 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-telemetry-combined-ca-bundle\") pod \"c777a6c5-214d-40e9-b948-0e8d7a872578\" (UID: \"c777a6c5-214d-40e9-b948-0e8d7a872578\") " Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.667489 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.668817 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.670431 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.670879 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.670922 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.670999 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.671196 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.671921 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.672628 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.675586 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.682558 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-kube-api-access-swtzb" (OuterVolumeSpecName: "kube-api-access-swtzb") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "kube-api-access-swtzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.682761 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.698942 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.718542 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-inventory" (OuterVolumeSpecName: "inventory") pod "c777a6c5-214d-40e9-b948-0e8d7a872578" (UID: "c777a6c5-214d-40e9-b948-0e8d7a872578"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764682 4865 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764734 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764751 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764765 4865 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764780 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 
04:49:53.764793 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764808 4865 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764822 4865 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764834 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764846 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764858 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764870 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swtzb\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-kube-api-access-swtzb\") on node 
\"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764883 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c777a6c5-214d-40e9-b948-0e8d7a872578-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:53 crc kubenswrapper[4865]: I0103 04:49:53.764896 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c777a6c5-214d-40e9-b948-0e8d7a872578-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.106140 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" event={"ID":"c777a6c5-214d-40e9-b948-0e8d7a872578","Type":"ContainerDied","Data":"cc2a80e348a9c3169bb0f9bbdd59aad7d1684268a0261bad68116bd2876a0dca"} Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.106181 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc2a80e348a9c3169bb0f9bbdd59aad7d1684268a0261bad68116bd2876a0dca" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.106188 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.242800 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k"] Jan 03 04:49:54 crc kubenswrapper[4865]: E0103 04:49:54.243492 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c777a6c5-214d-40e9-b948-0e8d7a872578" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.243519 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c777a6c5-214d-40e9-b948-0e8d7a872578" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.244565 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c777a6c5-214d-40e9-b948-0e8d7a872578" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.245246 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.247747 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.247934 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.249103 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.249197 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.249213 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.263514 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k"] Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.376199 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.376630 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.376771 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.376805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.376865 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flpjn\" (UniqueName: \"kubernetes.io/projected/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-kube-api-access-flpjn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.479185 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.479256 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.479320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flpjn\" (UniqueName: \"kubernetes.io/projected/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-kube-api-access-flpjn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.479370 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.479440 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.481590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc 
kubenswrapper[4865]: I0103 04:49:54.486033 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.486833 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.487969 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.511752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flpjn\" (UniqueName: \"kubernetes.io/projected/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-kube-api-access-flpjn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hc56k\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:54 crc kubenswrapper[4865]: I0103 04:49:54.568636 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:49:55 crc kubenswrapper[4865]: I0103 04:49:55.130417 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k"] Jan 03 04:49:56 crc kubenswrapper[4865]: I0103 04:49:56.131850 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" event={"ID":"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11","Type":"ContainerStarted","Data":"f547c521f8200bb4de6f799ce28de22950ebceadc81bd771ec06b7bae4aa4a2a"} Jan 03 04:49:57 crc kubenswrapper[4865]: I0103 04:49:57.145305 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" event={"ID":"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11","Type":"ContainerStarted","Data":"348a83d3044b14de6e4f1241a9704679165df6d8f0090ec03d12fec219dddcd8"} Jan 03 04:51:06 crc kubenswrapper[4865]: I0103 04:51:06.944194 4865 generic.go:334] "Generic (PLEG): container finished" podID="306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" containerID="348a83d3044b14de6e4f1241a9704679165df6d8f0090ec03d12fec219dddcd8" exitCode=0 Jan 03 04:51:06 crc kubenswrapper[4865]: I0103 04:51:06.944716 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" event={"ID":"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11","Type":"ContainerDied","Data":"348a83d3044b14de6e4f1241a9704679165df6d8f0090ec03d12fec219dddcd8"} Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.333622 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.513942 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovn-combined-ca-bundle\") pod \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.514453 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ssh-key\") pod \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.514676 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovncontroller-config-0\") pod \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.514983 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-inventory\") pod \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.515239 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flpjn\" (UniqueName: \"kubernetes.io/projected/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-kube-api-access-flpjn\") pod \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\" (UID: \"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11\") " Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.520971 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-kube-api-access-flpjn" (OuterVolumeSpecName: "kube-api-access-flpjn") pod "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" (UID: "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11"). InnerVolumeSpecName "kube-api-access-flpjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.527762 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" (UID: "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.546288 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" (UID: "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.552484 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" (UID: "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.569304 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-inventory" (OuterVolumeSpecName: "inventory") pod "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" (UID: "306b5e02-b107-4d9b-9d6e-66c1d4a5ed11"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.618097 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.618141 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.618160 4865 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.618179 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.618197 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flpjn\" (UniqueName: \"kubernetes.io/projected/306b5e02-b107-4d9b-9d6e-66c1d4a5ed11-kube-api-access-flpjn\") on node \"crc\" DevicePath \"\"" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.964724 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" event={"ID":"306b5e02-b107-4d9b-9d6e-66c1d4a5ed11","Type":"ContainerDied","Data":"f547c521f8200bb4de6f799ce28de22950ebceadc81bd771ec06b7bae4aa4a2a"} Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.964764 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f547c521f8200bb4de6f799ce28de22950ebceadc81bd771ec06b7bae4aa4a2a" Jan 03 04:51:08 crc kubenswrapper[4865]: I0103 04:51:08.964892 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hc56k" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.080562 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd"] Jan 03 04:51:09 crc kubenswrapper[4865]: E0103 04:51:09.081345 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.081450 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.081762 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="306b5e02-b107-4d9b-9d6e-66c1d4a5ed11" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.082621 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.084832 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.088312 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.088312 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.088816 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.088870 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.089964 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd"] Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.091406 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.229203 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.229254 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.229405 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.229434 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.229477 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.229498 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n7t8c\" (UniqueName: \"kubernetes.io/projected/649b14b9-86dc-4aa5-9086-8a90038e573f-kube-api-access-n7t8c\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.330732 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.331092 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.331248 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.331360 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7t8c\" (UniqueName: \"kubernetes.io/projected/649b14b9-86dc-4aa5-9086-8a90038e573f-kube-api-access-n7t8c\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.331567 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.331673 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.334927 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.335121 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: 
\"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.336235 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.337599 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.338761 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.356908 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7t8c\" (UniqueName: \"kubernetes.io/projected/649b14b9-86dc-4aa5-9086-8a90038e573f-kube-api-access-n7t8c\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 
04:51:09.403266 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.953856 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd"] Jan 03 04:51:09 crc kubenswrapper[4865]: I0103 04:51:09.974206 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" event={"ID":"649b14b9-86dc-4aa5-9086-8a90038e573f","Type":"ContainerStarted","Data":"75670d294dfbbc0233fb8b7dedc71d2051ce34b0c376572f0f1379040448d630"} Jan 03 04:51:12 crc kubenswrapper[4865]: I0103 04:51:12.000565 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" event={"ID":"649b14b9-86dc-4aa5-9086-8a90038e573f","Type":"ContainerStarted","Data":"ed2b16a7271679e335574201f575cf45aa89c6fdec9fe15d95f3aa29b46c35a7"} Jan 03 04:51:12 crc kubenswrapper[4865]: I0103 04:51:12.027413 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" podStartSLOduration=1.834003416 podStartE2EDuration="3.027375582s" podCreationTimestamp="2026-01-03 04:51:09 +0000 UTC" firstStartedPulling="2026-01-03 04:51:09.960734958 +0000 UTC m=+2097.077788153" lastFinishedPulling="2026-01-03 04:51:11.154107134 +0000 UTC m=+2098.271160319" observedRunningTime="2026-01-03 04:51:12.022003006 +0000 UTC m=+2099.139056191" watchObservedRunningTime="2026-01-03 04:51:12.027375582 +0000 UTC m=+2099.144428767" Jan 03 04:51:40 crc kubenswrapper[4865]: I0103 04:51:40.739904 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:51:40 crc kubenswrapper[4865]: I0103 04:51:40.740596 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:52:06 crc kubenswrapper[4865]: I0103 04:52:06.546493 4865 generic.go:334] "Generic (PLEG): container finished" podID="649b14b9-86dc-4aa5-9086-8a90038e573f" containerID="ed2b16a7271679e335574201f575cf45aa89c6fdec9fe15d95f3aa29b46c35a7" exitCode=0 Jan 03 04:52:06 crc kubenswrapper[4865]: I0103 04:52:06.546593 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" event={"ID":"649b14b9-86dc-4aa5-9086-8a90038e573f","Type":"ContainerDied","Data":"ed2b16a7271679e335574201f575cf45aa89c6fdec9fe15d95f3aa29b46c35a7"} Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.066520 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.144291 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-inventory\") pod \"649b14b9-86dc-4aa5-9086-8a90038e573f\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.144372 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-nova-metadata-neutron-config-0\") pod \"649b14b9-86dc-4aa5-9086-8a90038e573f\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.144754 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-metadata-combined-ca-bundle\") pod \"649b14b9-86dc-4aa5-9086-8a90038e573f\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.144852 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"649b14b9-86dc-4aa5-9086-8a90038e573f\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.144904 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7t8c\" (UniqueName: \"kubernetes.io/projected/649b14b9-86dc-4aa5-9086-8a90038e573f-kube-api-access-n7t8c\") pod \"649b14b9-86dc-4aa5-9086-8a90038e573f\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " Jan 
03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.144972 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-ssh-key\") pod \"649b14b9-86dc-4aa5-9086-8a90038e573f\" (UID: \"649b14b9-86dc-4aa5-9086-8a90038e573f\") " Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.150574 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "649b14b9-86dc-4aa5-9086-8a90038e573f" (UID: "649b14b9-86dc-4aa5-9086-8a90038e573f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.151703 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/649b14b9-86dc-4aa5-9086-8a90038e573f-kube-api-access-n7t8c" (OuterVolumeSpecName: "kube-api-access-n7t8c") pod "649b14b9-86dc-4aa5-9086-8a90038e573f" (UID: "649b14b9-86dc-4aa5-9086-8a90038e573f"). InnerVolumeSpecName "kube-api-access-n7t8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.171625 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-inventory" (OuterVolumeSpecName: "inventory") pod "649b14b9-86dc-4aa5-9086-8a90038e573f" (UID: "649b14b9-86dc-4aa5-9086-8a90038e573f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.179591 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "649b14b9-86dc-4aa5-9086-8a90038e573f" (UID: "649b14b9-86dc-4aa5-9086-8a90038e573f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.186590 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "649b14b9-86dc-4aa5-9086-8a90038e573f" (UID: "649b14b9-86dc-4aa5-9086-8a90038e573f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.203729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "649b14b9-86dc-4aa5-9086-8a90038e573f" (UID: "649b14b9-86dc-4aa5-9086-8a90038e573f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.248142 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7t8c\" (UniqueName: \"kubernetes.io/projected/649b14b9-86dc-4aa5-9086-8a90038e573f-kube-api-access-n7t8c\") on node \"crc\" DevicePath \"\"" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.248185 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.248197 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.248211 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.248224 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.248237 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/649b14b9-86dc-4aa5-9086-8a90038e573f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.568884 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" 
event={"ID":"649b14b9-86dc-4aa5-9086-8a90038e573f","Type":"ContainerDied","Data":"75670d294dfbbc0233fb8b7dedc71d2051ce34b0c376572f0f1379040448d630"} Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.568946 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75670d294dfbbc0233fb8b7dedc71d2051ce34b0c376572f0f1379040448d630" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.568949 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.657798 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl"] Jan 03 04:52:08 crc kubenswrapper[4865]: E0103 04:52:08.658464 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="649b14b9-86dc-4aa5-9086-8a90038e573f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.658583 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="649b14b9-86dc-4aa5-9086-8a90038e573f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.658910 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="649b14b9-86dc-4aa5-9086-8a90038e573f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.659813 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.662930 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.663123 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.663443 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.663665 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.664065 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.679580 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl"] Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.756315 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.756406 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.756431 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.756474 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9t5\" (UniqueName: \"kubernetes.io/projected/11b63132-1f33-4f08-9ddd-b705cc52d950-kube-api-access-zq9t5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.756496 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.858046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.858372 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.858435 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.858526 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq9t5\" (UniqueName: \"kubernetes.io/projected/11b63132-1f33-4f08-9ddd-b705cc52d950-kube-api-access-zq9t5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.858575 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.863200 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 
crc kubenswrapper[4865]: I0103 04:52:08.863374 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.864523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.869661 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.888455 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq9t5\" (UniqueName: \"kubernetes.io/projected/11b63132-1f33-4f08-9ddd-b705cc52d950-kube-api-access-zq9t5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:08 crc kubenswrapper[4865]: I0103 04:52:08.978541 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:52:09 crc kubenswrapper[4865]: I0103 04:52:09.571983 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl"] Jan 03 04:52:09 crc kubenswrapper[4865]: I0103 04:52:09.580715 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" event={"ID":"11b63132-1f33-4f08-9ddd-b705cc52d950","Type":"ContainerStarted","Data":"1535ab77139163c00b3d29430f8eb42da341e6dde16cb60c3888fe469b891e87"} Jan 03 04:52:10 crc kubenswrapper[4865]: I0103 04:52:10.589747 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" event={"ID":"11b63132-1f33-4f08-9ddd-b705cc52d950","Type":"ContainerStarted","Data":"ac2a2a736919719fe3c3d6993e287b3f7ede72ba0ca0108469d5df80daf2f3a1"} Jan 03 04:52:10 crc kubenswrapper[4865]: I0103 04:52:10.612923 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" podStartSLOduration=2.16731405 podStartE2EDuration="2.612891038s" podCreationTimestamp="2026-01-03 04:52:08 +0000 UTC" firstStartedPulling="2026-01-03 04:52:09.569244502 +0000 UTC m=+2156.686297687" lastFinishedPulling="2026-01-03 04:52:10.01482145 +0000 UTC m=+2157.131874675" observedRunningTime="2026-01-03 04:52:10.609644849 +0000 UTC m=+2157.726698084" watchObservedRunningTime="2026-01-03 04:52:10.612891038 +0000 UTC m=+2157.729944303" Jan 03 04:52:10 crc kubenswrapper[4865]: I0103 04:52:10.740033 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:52:10 crc kubenswrapper[4865]: I0103 
04:52:10.740126 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:52:40 crc kubenswrapper[4865]: I0103 04:52:40.739153 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:52:40 crc kubenswrapper[4865]: I0103 04:52:40.739974 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:52:40 crc kubenswrapper[4865]: I0103 04:52:40.740039 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:52:40 crc kubenswrapper[4865]: I0103 04:52:40.740966 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59295a62d7927d7c1994c3717d9a09dd59ab4eba318cf0d72d452c45ed7c1df0"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:52:40 crc kubenswrapper[4865]: I0103 04:52:40.741066 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" 
containerName="machine-config-daemon" containerID="cri-o://59295a62d7927d7c1994c3717d9a09dd59ab4eba318cf0d72d452c45ed7c1df0" gracePeriod=600 Jan 03 04:52:41 crc kubenswrapper[4865]: I0103 04:52:41.025831 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="59295a62d7927d7c1994c3717d9a09dd59ab4eba318cf0d72d452c45ed7c1df0" exitCode=0 Jan 03 04:52:41 crc kubenswrapper[4865]: I0103 04:52:41.025927 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"59295a62d7927d7c1994c3717d9a09dd59ab4eba318cf0d72d452c45ed7c1df0"} Jan 03 04:52:41 crc kubenswrapper[4865]: I0103 04:52:41.026272 4865 scope.go:117] "RemoveContainer" containerID="6f64ecb13fa9fa553140641323f718acfc7b577bab872d9e169ed9d2cb213d3e" Jan 03 04:52:42 crc kubenswrapper[4865]: I0103 04:52:42.038641 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21"} Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.277759 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmr5r"] Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.280361 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.304284 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmr5r"] Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.307582 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68px7\" (UniqueName: \"kubernetes.io/projected/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-kube-api-access-68px7\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.307705 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-utilities\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.307757 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-catalog-content\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.410269 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68px7\" (UniqueName: \"kubernetes.io/projected/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-kube-api-access-68px7\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.410363 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-utilities\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.410443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-catalog-content\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.411013 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-catalog-content\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.411321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-utilities\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.429867 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68px7\" (UniqueName: \"kubernetes.io/projected/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-kube-api-access-68px7\") pod \"community-operators-vmr5r\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:23 crc kubenswrapper[4865]: I0103 04:53:23.620905 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:24 crc kubenswrapper[4865]: I0103 04:53:24.153187 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmr5r"] Jan 03 04:53:24 crc kubenswrapper[4865]: I0103 04:53:24.535352 4865 generic.go:334] "Generic (PLEG): container finished" podID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerID="b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a" exitCode=0 Jan 03 04:53:24 crc kubenswrapper[4865]: I0103 04:53:24.535427 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr5r" event={"ID":"ea49e23f-d2bf-4d27-88ff-daeb246eb25a","Type":"ContainerDied","Data":"b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a"} Jan 03 04:53:24 crc kubenswrapper[4865]: I0103 04:53:24.535462 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr5r" event={"ID":"ea49e23f-d2bf-4d27-88ff-daeb246eb25a","Type":"ContainerStarted","Data":"70ff7292a28bf445519f649771e95cd0bdb2abddef08180bcec1e0351d59e178"} Jan 03 04:53:25 crc kubenswrapper[4865]: I0103 04:53:25.556968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr5r" event={"ID":"ea49e23f-d2bf-4d27-88ff-daeb246eb25a","Type":"ContainerStarted","Data":"3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2"} Jan 03 04:53:26 crc kubenswrapper[4865]: I0103 04:53:26.572156 4865 generic.go:334] "Generic (PLEG): container finished" podID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerID="3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2" exitCode=0 Jan 03 04:53:26 crc kubenswrapper[4865]: I0103 04:53:26.572208 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr5r" 
event={"ID":"ea49e23f-d2bf-4d27-88ff-daeb246eb25a","Type":"ContainerDied","Data":"3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2"} Jan 03 04:53:27 crc kubenswrapper[4865]: I0103 04:53:27.587646 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr5r" event={"ID":"ea49e23f-d2bf-4d27-88ff-daeb246eb25a","Type":"ContainerStarted","Data":"b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33"} Jan 03 04:53:27 crc kubenswrapper[4865]: I0103 04:53:27.607234 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmr5r" podStartSLOduration=2.169738833 podStartE2EDuration="4.607217019s" podCreationTimestamp="2026-01-03 04:53:23 +0000 UTC" firstStartedPulling="2026-01-03 04:53:24.537962873 +0000 UTC m=+2231.655016058" lastFinishedPulling="2026-01-03 04:53:26.975441019 +0000 UTC m=+2234.092494244" observedRunningTime="2026-01-03 04:53:27.604879265 +0000 UTC m=+2234.721932460" watchObservedRunningTime="2026-01-03 04:53:27.607217019 +0000 UTC m=+2234.724270204" Jan 03 04:53:33 crc kubenswrapper[4865]: I0103 04:53:33.621344 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:33 crc kubenswrapper[4865]: I0103 04:53:33.621977 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:33 crc kubenswrapper[4865]: I0103 04:53:33.675625 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:33 crc kubenswrapper[4865]: I0103 04:53:33.740299 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:33 crc kubenswrapper[4865]: I0103 04:53:33.926453 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-vmr5r"] Jan 03 04:53:35 crc kubenswrapper[4865]: I0103 04:53:35.662328 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmr5r" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerName="registry-server" containerID="cri-o://b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33" gracePeriod=2 Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.072196 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.178873 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68px7\" (UniqueName: \"kubernetes.io/projected/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-kube-api-access-68px7\") pod \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.178957 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-utilities\") pod \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.178988 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-catalog-content\") pod \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\" (UID: \"ea49e23f-d2bf-4d27-88ff-daeb246eb25a\") " Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.180350 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-utilities" (OuterVolumeSpecName: "utilities") pod "ea49e23f-d2bf-4d27-88ff-daeb246eb25a" (UID: 
"ea49e23f-d2bf-4d27-88ff-daeb246eb25a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.184762 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-kube-api-access-68px7" (OuterVolumeSpecName: "kube-api-access-68px7") pod "ea49e23f-d2bf-4d27-88ff-daeb246eb25a" (UID: "ea49e23f-d2bf-4d27-88ff-daeb246eb25a"). InnerVolumeSpecName "kube-api-access-68px7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.229617 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea49e23f-d2bf-4d27-88ff-daeb246eb25a" (UID: "ea49e23f-d2bf-4d27-88ff-daeb246eb25a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.281249 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68px7\" (UniqueName: \"kubernetes.io/projected/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-kube-api-access-68px7\") on node \"crc\" DevicePath \"\"" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.281283 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.281292 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea49e23f-d2bf-4d27-88ff-daeb246eb25a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.674963 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerID="b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33" exitCode=0 Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.675011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr5r" event={"ID":"ea49e23f-d2bf-4d27-88ff-daeb246eb25a","Type":"ContainerDied","Data":"b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33"} Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.675037 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmr5r" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.675058 4865 scope.go:117] "RemoveContainer" containerID="b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.675044 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr5r" event={"ID":"ea49e23f-d2bf-4d27-88ff-daeb246eb25a","Type":"ContainerDied","Data":"70ff7292a28bf445519f649771e95cd0bdb2abddef08180bcec1e0351d59e178"} Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.731177 4865 scope.go:117] "RemoveContainer" containerID="3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.733560 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmr5r"] Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.746293 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmr5r"] Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.768704 4865 scope.go:117] "RemoveContainer" containerID="b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.808783 4865 scope.go:117] "RemoveContainer" 
containerID="b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33" Jan 03 04:53:36 crc kubenswrapper[4865]: E0103 04:53:36.809481 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33\": container with ID starting with b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33 not found: ID does not exist" containerID="b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.809534 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33"} err="failed to get container status \"b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33\": rpc error: code = NotFound desc = could not find container \"b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33\": container with ID starting with b0f38bd80847a72d6f9789ce9c9c3dee6b23e0235a06086583bae10c47b3fb33 not found: ID does not exist" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.809567 4865 scope.go:117] "RemoveContainer" containerID="3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2" Jan 03 04:53:36 crc kubenswrapper[4865]: E0103 04:53:36.809966 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2\": container with ID starting with 3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2 not found: ID does not exist" containerID="3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.810011 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2"} err="failed to get container status \"3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2\": rpc error: code = NotFound desc = could not find container \"3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2\": container with ID starting with 3fe4114c886164721b410f7ccabcb5494e9deb0e8d037a7bcec5a2dabb669fa2 not found: ID does not exist" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.810036 4865 scope.go:117] "RemoveContainer" containerID="b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a" Jan 03 04:53:36 crc kubenswrapper[4865]: E0103 04:53:36.810523 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a\": container with ID starting with b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a not found: ID does not exist" containerID="b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a" Jan 03 04:53:36 crc kubenswrapper[4865]: I0103 04:53:36.810564 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a"} err="failed to get container status \"b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a\": rpc error: code = NotFound desc = could not find container \"b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a\": container with ID starting with b5cbb597118c10121d4d272a147fdcf40614e8fa51b9576380668ac0ba3f424a not found: ID does not exist" Jan 03 04:53:37 crc kubenswrapper[4865]: I0103 04:53:37.170746 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" path="/var/lib/kubelet/pods/ea49e23f-d2bf-4d27-88ff-daeb246eb25a/volumes" Jan 03 04:55:10 crc kubenswrapper[4865]: I0103 
04:55:10.739306 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:55:10 crc kubenswrapper[4865]: I0103 04:55:10.740302 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:55:40 crc kubenswrapper[4865]: I0103 04:55:40.739475 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:55:40 crc kubenswrapper[4865]: I0103 04:55:40.741090 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.822283 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mrdbf"] Jan 03 04:56:00 crc kubenswrapper[4865]: E0103 04:56:00.823258 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerName="registry-server" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.823271 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" 
containerName="registry-server" Jan 03 04:56:00 crc kubenswrapper[4865]: E0103 04:56:00.823299 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerName="extract-utilities" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.823306 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerName="extract-utilities" Jan 03 04:56:00 crc kubenswrapper[4865]: E0103 04:56:00.823321 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerName="extract-content" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.823327 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerName="extract-content" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.823532 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea49e23f-d2bf-4d27-88ff-daeb246eb25a" containerName="registry-server" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.824733 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.894140 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrdbf"] Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.967504 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-utilities\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.967610 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqq7f\" (UniqueName: \"kubernetes.io/projected/61649372-ed74-4130-aef9-e69f63e3fb0e-kube-api-access-cqq7f\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:00 crc kubenswrapper[4865]: I0103 04:56:00.967652 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-catalog-content\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.069292 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-utilities\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.069414 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqq7f\" (UniqueName: \"kubernetes.io/projected/61649372-ed74-4130-aef9-e69f63e3fb0e-kube-api-access-cqq7f\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.069451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-catalog-content\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.069883 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-utilities\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.069933 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-catalog-content\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.092807 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqq7f\" (UniqueName: \"kubernetes.io/projected/61649372-ed74-4130-aef9-e69f63e3fb0e-kube-api-access-cqq7f\") pod \"redhat-marketplace-mrdbf\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.165844 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.417456 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bf77c"] Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.419499 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.432901 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bf77c"] Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.580681 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-catalog-content\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.581219 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-utilities\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.581451 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2sqs\" (UniqueName: \"kubernetes.io/projected/b9bebab1-adf4-4d04-8409-47bd432ede9e-kube-api-access-g2sqs\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.622079 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-mrdbf"] Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.683142 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-utilities\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.683211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2sqs\" (UniqueName: \"kubernetes.io/projected/b9bebab1-adf4-4d04-8409-47bd432ede9e-kube-api-access-g2sqs\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.683249 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-catalog-content\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.684101 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-catalog-content\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.684334 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-utilities\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " 
pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.706564 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2sqs\" (UniqueName: \"kubernetes.io/projected/b9bebab1-adf4-4d04-8409-47bd432ede9e-kube-api-access-g2sqs\") pod \"certified-operators-bf77c\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:01 crc kubenswrapper[4865]: I0103 04:56:01.739646 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:02 crc kubenswrapper[4865]: I0103 04:56:02.215732 4865 generic.go:334] "Generic (PLEG): container finished" podID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerID="fcfc83e419f20daa7af3783ca3c9f82e657924101f98f57b6601e16fe0a04626" exitCode=0 Jan 03 04:56:02 crc kubenswrapper[4865]: I0103 04:56:02.215830 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrdbf" event={"ID":"61649372-ed74-4130-aef9-e69f63e3fb0e","Type":"ContainerDied","Data":"fcfc83e419f20daa7af3783ca3c9f82e657924101f98f57b6601e16fe0a04626"} Jan 03 04:56:02 crc kubenswrapper[4865]: I0103 04:56:02.216080 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrdbf" event={"ID":"61649372-ed74-4130-aef9-e69f63e3fb0e","Type":"ContainerStarted","Data":"217ff910378919a0dedf9c8b22f20b22f5b7fb1ed2105961c7777a9d772825a0"} Jan 03 04:56:02 crc kubenswrapper[4865]: I0103 04:56:02.218435 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 04:56:02 crc kubenswrapper[4865]: I0103 04:56:02.257608 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bf77c"] Jan 03 04:56:03 crc kubenswrapper[4865]: I0103 04:56:03.225782 4865 generic.go:334] "Generic (PLEG): container 
finished" podID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerID="2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba" exitCode=0 Jan 03 04:56:03 crc kubenswrapper[4865]: I0103 04:56:03.225899 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf77c" event={"ID":"b9bebab1-adf4-4d04-8409-47bd432ede9e","Type":"ContainerDied","Data":"2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba"} Jan 03 04:56:03 crc kubenswrapper[4865]: I0103 04:56:03.226226 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf77c" event={"ID":"b9bebab1-adf4-4d04-8409-47bd432ede9e","Type":"ContainerStarted","Data":"4a06481ded28363fe927c06862763fb6f8ac94d5b139340b4f1a399a97405dcc"} Jan 03 04:56:03 crc kubenswrapper[4865]: I0103 04:56:03.228021 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrdbf" event={"ID":"61649372-ed74-4130-aef9-e69f63e3fb0e","Type":"ContainerStarted","Data":"5c60a68be79ff4743dda0a244483066c30aad9c642129814cf8915ec4a5d14d0"} Jan 03 04:56:04 crc kubenswrapper[4865]: I0103 04:56:04.238143 4865 generic.go:334] "Generic (PLEG): container finished" podID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerID="5c60a68be79ff4743dda0a244483066c30aad9c642129814cf8915ec4a5d14d0" exitCode=0 Jan 03 04:56:04 crc kubenswrapper[4865]: I0103 04:56:04.238182 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrdbf" event={"ID":"61649372-ed74-4130-aef9-e69f63e3fb0e","Type":"ContainerDied","Data":"5c60a68be79ff4743dda0a244483066c30aad9c642129814cf8915ec4a5d14d0"} Jan 03 04:56:04 crc kubenswrapper[4865]: I0103 04:56:04.241238 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf77c" 
event={"ID":"b9bebab1-adf4-4d04-8409-47bd432ede9e","Type":"ContainerStarted","Data":"9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4"} Jan 03 04:56:05 crc kubenswrapper[4865]: I0103 04:56:05.254630 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrdbf" event={"ID":"61649372-ed74-4130-aef9-e69f63e3fb0e","Type":"ContainerStarted","Data":"e1967fa4f080c8d8288d3012e7d05f60d5e62cd569570f41aab292b11b0051fa"} Jan 03 04:56:05 crc kubenswrapper[4865]: I0103 04:56:05.269732 4865 generic.go:334] "Generic (PLEG): container finished" podID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerID="9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4" exitCode=0 Jan 03 04:56:05 crc kubenswrapper[4865]: I0103 04:56:05.269809 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf77c" event={"ID":"b9bebab1-adf4-4d04-8409-47bd432ede9e","Type":"ContainerDied","Data":"9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4"} Jan 03 04:56:05 crc kubenswrapper[4865]: I0103 04:56:05.301575 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mrdbf" podStartSLOduration=2.670924875 podStartE2EDuration="5.301555928s" podCreationTimestamp="2026-01-03 04:56:00 +0000 UTC" firstStartedPulling="2026-01-03 04:56:02.218171 +0000 UTC m=+2389.335224185" lastFinishedPulling="2026-01-03 04:56:04.848802053 +0000 UTC m=+2391.965855238" observedRunningTime="2026-01-03 04:56:05.295869082 +0000 UTC m=+2392.412922277" watchObservedRunningTime="2026-01-03 04:56:05.301555928 +0000 UTC m=+2392.418609113" Jan 03 04:56:06 crc kubenswrapper[4865]: I0103 04:56:06.294946 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf77c" 
event={"ID":"b9bebab1-adf4-4d04-8409-47bd432ede9e","Type":"ContainerStarted","Data":"e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6"} Jan 03 04:56:06 crc kubenswrapper[4865]: I0103 04:56:06.314003 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bf77c" podStartSLOduration=2.713075951 podStartE2EDuration="5.313987374s" podCreationTimestamp="2026-01-03 04:56:01 +0000 UTC" firstStartedPulling="2026-01-03 04:56:03.227871032 +0000 UTC m=+2390.344924217" lastFinishedPulling="2026-01-03 04:56:05.828782435 +0000 UTC m=+2392.945835640" observedRunningTime="2026-01-03 04:56:06.311966979 +0000 UTC m=+2393.429020184" watchObservedRunningTime="2026-01-03 04:56:06.313987374 +0000 UTC m=+2393.431040559" Jan 03 04:56:10 crc kubenswrapper[4865]: I0103 04:56:10.739689 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 04:56:10 crc kubenswrapper[4865]: I0103 04:56:10.740197 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 04:56:10 crc kubenswrapper[4865]: I0103 04:56:10.740282 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 04:56:10 crc kubenswrapper[4865]: I0103 04:56:10.741624 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 04:56:10 crc kubenswrapper[4865]: I0103 04:56:10.741744 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" gracePeriod=600 Jan 03 04:56:11 crc kubenswrapper[4865]: I0103 04:56:11.170371 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:11 crc kubenswrapper[4865]: I0103 04:56:11.170486 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:11 crc kubenswrapper[4865]: I0103 04:56:11.225059 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:11 crc kubenswrapper[4865]: I0103 04:56:11.383949 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:11 crc kubenswrapper[4865]: I0103 04:56:11.467238 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrdbf"] Jan 03 04:56:11 crc kubenswrapper[4865]: I0103 04:56:11.740644 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:11 crc kubenswrapper[4865]: I0103 04:56:11.740729 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:11 crc 
kubenswrapper[4865]: I0103 04:56:11.798704 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:11 crc kubenswrapper[4865]: E0103 04:56:11.984941 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:56:12 crc kubenswrapper[4865]: I0103 04:56:12.355971 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" exitCode=0 Jan 03 04:56:12 crc kubenswrapper[4865]: I0103 04:56:12.356060 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21"} Jan 03 04:56:12 crc kubenswrapper[4865]: I0103 04:56:12.356145 4865 scope.go:117] "RemoveContainer" containerID="59295a62d7927d7c1994c3717d9a09dd59ab4eba318cf0d72d452c45ed7c1df0" Jan 03 04:56:12 crc kubenswrapper[4865]: I0103 04:56:12.357780 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:56:12 crc kubenswrapper[4865]: E0103 04:56:12.358370 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:56:12 crc kubenswrapper[4865]: I0103 04:56:12.456905 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:13 crc kubenswrapper[4865]: I0103 04:56:13.366729 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mrdbf" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="registry-server" containerID="cri-o://e1967fa4f080c8d8288d3012e7d05f60d5e62cd569570f41aab292b11b0051fa" gracePeriod=2 Jan 03 04:56:13 crc kubenswrapper[4865]: I0103 04:56:13.873145 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bf77c"] Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.378267 4865 generic.go:334] "Generic (PLEG): container finished" podID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerID="e1967fa4f080c8d8288d3012e7d05f60d5e62cd569570f41aab292b11b0051fa" exitCode=0 Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.378357 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.378420 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrdbf" event={"ID":"61649372-ed74-4130-aef9-e69f63e3fb0e","Type":"ContainerDied","Data":"e1967fa4f080c8d8288d3012e7d05f60d5e62cd569570f41aab292b11b0051fa"} Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.378476 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrdbf" event={"ID":"61649372-ed74-4130-aef9-e69f63e3fb0e","Type":"ContainerDied","Data":"217ff910378919a0dedf9c8b22f20b22f5b7fb1ed2105961c7777a9d772825a0"} Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.378498 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="217ff910378919a0dedf9c8b22f20b22f5b7fb1ed2105961c7777a9d772825a0" Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.529029 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-utilities\") pod \"61649372-ed74-4130-aef9-e69f63e3fb0e\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.529103 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqq7f\" (UniqueName: \"kubernetes.io/projected/61649372-ed74-4130-aef9-e69f63e3fb0e-kube-api-access-cqq7f\") pod \"61649372-ed74-4130-aef9-e69f63e3fb0e\" (UID: \"61649372-ed74-4130-aef9-e69f63e3fb0e\") " Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.529207 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-catalog-content\") pod \"61649372-ed74-4130-aef9-e69f63e3fb0e\" (UID: 
\"61649372-ed74-4130-aef9-e69f63e3fb0e\") " Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.529950 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-utilities" (OuterVolumeSpecName: "utilities") pod "61649372-ed74-4130-aef9-e69f63e3fb0e" (UID: "61649372-ed74-4130-aef9-e69f63e3fb0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.535435 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61649372-ed74-4130-aef9-e69f63e3fb0e-kube-api-access-cqq7f" (OuterVolumeSpecName: "kube-api-access-cqq7f") pod "61649372-ed74-4130-aef9-e69f63e3fb0e" (UID: "61649372-ed74-4130-aef9-e69f63e3fb0e"). InnerVolumeSpecName "kube-api-access-cqq7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.573077 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61649372-ed74-4130-aef9-e69f63e3fb0e" (UID: "61649372-ed74-4130-aef9-e69f63e3fb0e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.632565 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.632631 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61649372-ed74-4130-aef9-e69f63e3fb0e-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:14 crc kubenswrapper[4865]: I0103 04:56:14.632662 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqq7f\" (UniqueName: \"kubernetes.io/projected/61649372-ed74-4130-aef9-e69f63e3fb0e-kube-api-access-cqq7f\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.388363 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrdbf" Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.388457 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bf77c" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerName="registry-server" containerID="cri-o://e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6" gracePeriod=2 Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.422452 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrdbf"] Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.438419 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrdbf"] Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.804322 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.954342 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-catalog-content\") pod \"b9bebab1-adf4-4d04-8409-47bd432ede9e\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.954544 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-utilities\") pod \"b9bebab1-adf4-4d04-8409-47bd432ede9e\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.954640 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2sqs\" (UniqueName: \"kubernetes.io/projected/b9bebab1-adf4-4d04-8409-47bd432ede9e-kube-api-access-g2sqs\") pod \"b9bebab1-adf4-4d04-8409-47bd432ede9e\" (UID: \"b9bebab1-adf4-4d04-8409-47bd432ede9e\") " Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.955725 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-utilities" (OuterVolumeSpecName: "utilities") pod "b9bebab1-adf4-4d04-8409-47bd432ede9e" (UID: "b9bebab1-adf4-4d04-8409-47bd432ede9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:56:15 crc kubenswrapper[4865]: I0103 04:56:15.961274 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bebab1-adf4-4d04-8409-47bd432ede9e-kube-api-access-g2sqs" (OuterVolumeSpecName: "kube-api-access-g2sqs") pod "b9bebab1-adf4-4d04-8409-47bd432ede9e" (UID: "b9bebab1-adf4-4d04-8409-47bd432ede9e"). InnerVolumeSpecName "kube-api-access-g2sqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.025612 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9bebab1-adf4-4d04-8409-47bd432ede9e" (UID: "b9bebab1-adf4-4d04-8409-47bd432ede9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.056418 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.056456 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bebab1-adf4-4d04-8409-47bd432ede9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.056467 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2sqs\" (UniqueName: \"kubernetes.io/projected/b9bebab1-adf4-4d04-8409-47bd432ede9e-kube-api-access-g2sqs\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.406956 4865 generic.go:334] "Generic (PLEG): container finished" podID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerID="e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6" exitCode=0 Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.407019 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf77c" event={"ID":"b9bebab1-adf4-4d04-8409-47bd432ede9e","Type":"ContainerDied","Data":"e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6"} Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.407041 4865 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf77c" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.407063 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf77c" event={"ID":"b9bebab1-adf4-4d04-8409-47bd432ede9e","Type":"ContainerDied","Data":"4a06481ded28363fe927c06862763fb6f8ac94d5b139340b4f1a399a97405dcc"} Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.407093 4865 scope.go:117] "RemoveContainer" containerID="e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.446756 4865 scope.go:117] "RemoveContainer" containerID="9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.482705 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bf77c"] Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.500819 4865 scope.go:117] "RemoveContainer" containerID="2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.501806 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bf77c"] Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.552492 4865 scope.go:117] "RemoveContainer" containerID="e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6" Jan 03 04:56:16 crc kubenswrapper[4865]: E0103 04:56:16.553357 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6\": container with ID starting with e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6 not found: ID does not exist" containerID="e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.553687 
4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6"} err="failed to get container status \"e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6\": rpc error: code = NotFound desc = could not find container \"e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6\": container with ID starting with e3ee35ad8a2d1e11a004b8dad49c9ba4c61c499772a94320e82e8ded305c61b6 not found: ID does not exist" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.553819 4865 scope.go:117] "RemoveContainer" containerID="9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4" Jan 03 04:56:16 crc kubenswrapper[4865]: E0103 04:56:16.554507 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4\": container with ID starting with 9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4 not found: ID does not exist" containerID="9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.554542 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4"} err="failed to get container status \"9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4\": rpc error: code = NotFound desc = could not find container \"9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4\": container with ID starting with 9b1fa11ccfa06ba9821c4273c1a0dfd2180e584905e1f41fa54a515b077de6b4 not found: ID does not exist" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.554567 4865 scope.go:117] "RemoveContainer" containerID="2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba" Jan 03 04:56:16 crc kubenswrapper[4865]: E0103 
04:56:16.554862 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba\": container with ID starting with 2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba not found: ID does not exist" containerID="2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba" Jan 03 04:56:16 crc kubenswrapper[4865]: I0103 04:56:16.554908 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba"} err="failed to get container status \"2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba\": rpc error: code = NotFound desc = could not find container \"2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba\": container with ID starting with 2a9fcc1d72a1fe27b7ea8aae07eb83f1a016ec844a089b996bb9ebf3e634e9ba not found: ID does not exist" Jan 03 04:56:17 crc kubenswrapper[4865]: I0103 04:56:17.173203 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" path="/var/lib/kubelet/pods/61649372-ed74-4130-aef9-e69f63e3fb0e/volumes" Jan 03 04:56:17 crc kubenswrapper[4865]: I0103 04:56:17.174645 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" path="/var/lib/kubelet/pods/b9bebab1-adf4-4d04-8409-47bd432ede9e/volumes" Jan 03 04:56:24 crc kubenswrapper[4865]: I0103 04:56:24.156592 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:56:24 crc kubenswrapper[4865]: E0103 04:56:24.157484 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:56:39 crc kubenswrapper[4865]: I0103 04:56:39.156679 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:56:39 crc kubenswrapper[4865]: E0103 04:56:39.159069 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:56:41 crc kubenswrapper[4865]: I0103 04:56:41.690137 4865 generic.go:334] "Generic (PLEG): container finished" podID="11b63132-1f33-4f08-9ddd-b705cc52d950" containerID="ac2a2a736919719fe3c3d6993e287b3f7ede72ba0ca0108469d5df80daf2f3a1" exitCode=0 Jan 03 04:56:41 crc kubenswrapper[4865]: I0103 04:56:41.690297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" event={"ID":"11b63132-1f33-4f08-9ddd-b705cc52d950","Type":"ContainerDied","Data":"ac2a2a736919719fe3c3d6993e287b3f7ede72ba0ca0108469d5df80daf2f3a1"} Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.163848 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.336270 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-secret-0\") pod \"11b63132-1f33-4f08-9ddd-b705cc52d950\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.336348 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-ssh-key\") pod \"11b63132-1f33-4f08-9ddd-b705cc52d950\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.336514 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq9t5\" (UniqueName: \"kubernetes.io/projected/11b63132-1f33-4f08-9ddd-b705cc52d950-kube-api-access-zq9t5\") pod \"11b63132-1f33-4f08-9ddd-b705cc52d950\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.336552 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-inventory\") pod \"11b63132-1f33-4f08-9ddd-b705cc52d950\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.336598 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-combined-ca-bundle\") pod \"11b63132-1f33-4f08-9ddd-b705cc52d950\" (UID: \"11b63132-1f33-4f08-9ddd-b705cc52d950\") " Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.342084 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "11b63132-1f33-4f08-9ddd-b705cc52d950" (UID: "11b63132-1f33-4f08-9ddd-b705cc52d950"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.344631 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b63132-1f33-4f08-9ddd-b705cc52d950-kube-api-access-zq9t5" (OuterVolumeSpecName: "kube-api-access-zq9t5") pod "11b63132-1f33-4f08-9ddd-b705cc52d950" (UID: "11b63132-1f33-4f08-9ddd-b705cc52d950"). InnerVolumeSpecName "kube-api-access-zq9t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.364171 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "11b63132-1f33-4f08-9ddd-b705cc52d950" (UID: "11b63132-1f33-4f08-9ddd-b705cc52d950"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.368795 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "11b63132-1f33-4f08-9ddd-b705cc52d950" (UID: "11b63132-1f33-4f08-9ddd-b705cc52d950"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.372535 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-inventory" (OuterVolumeSpecName: "inventory") pod "11b63132-1f33-4f08-9ddd-b705cc52d950" (UID: "11b63132-1f33-4f08-9ddd-b705cc52d950"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.439053 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.439089 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq9t5\" (UniqueName: \"kubernetes.io/projected/11b63132-1f33-4f08-9ddd-b705cc52d950-kube-api-access-zq9t5\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.439101 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.439111 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.439122 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/11b63132-1f33-4f08-9ddd-b705cc52d950-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.712503 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" event={"ID":"11b63132-1f33-4f08-9ddd-b705cc52d950","Type":"ContainerDied","Data":"1535ab77139163c00b3d29430f8eb42da341e6dde16cb60c3888fe469b891e87"} Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.712552 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1535ab77139163c00b3d29430f8eb42da341e6dde16cb60c3888fe469b891e87" Jan 03 04:56:43 
crc kubenswrapper[4865]: I0103 04:56:43.712972 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.806077 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x"] Jan 03 04:56:43 crc kubenswrapper[4865]: E0103 04:56:43.806898 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b63132-1f33-4f08-9ddd-b705cc52d950" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.806934 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b63132-1f33-4f08-9ddd-b705cc52d950" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 03 04:56:43 crc kubenswrapper[4865]: E0103 04:56:43.806966 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerName="registry-server" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.806978 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerName="registry-server" Jan 03 04:56:43 crc kubenswrapper[4865]: E0103 04:56:43.807024 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="registry-server" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807034 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="registry-server" Jan 03 04:56:43 crc kubenswrapper[4865]: E0103 04:56:43.807050 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerName="extract-content" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807058 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" 
containerName="extract-content" Jan 03 04:56:43 crc kubenswrapper[4865]: E0103 04:56:43.807080 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerName="extract-utilities" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807093 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerName="extract-utilities" Jan 03 04:56:43 crc kubenswrapper[4865]: E0103 04:56:43.807114 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="extract-utilities" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807126 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="extract-utilities" Jan 03 04:56:43 crc kubenswrapper[4865]: E0103 04:56:43.807144 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="extract-content" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807152 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="extract-content" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807441 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bebab1-adf4-4d04-8409-47bd432ede9e" containerName="registry-server" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807468 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b63132-1f33-4f08-9ddd-b705cc52d950" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.807485 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="61649372-ed74-4130-aef9-e69f63e3fb0e" containerName="registry-server" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.808692 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.814129 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.818184 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x"] Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.818766 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.818929 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.819051 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.819127 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.819176 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.819238 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950554 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvjhm\" (UniqueName: \"kubernetes.io/projected/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-kube-api-access-vvjhm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 
04:56:43.950608 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950631 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950701 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950747 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950835 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950892 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:43 crc kubenswrapper[4865]: I0103 04:56:43.950930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.052461 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: 
\"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.052524 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.052559 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.052586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.052647 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.053192 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.053256 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvjhm\" (UniqueName: \"kubernetes.io/projected/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-kube-api-access-vvjhm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.053301 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.053318 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.054492 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.057494 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.057935 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.058116 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.058490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.059422 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.064927 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.066316 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.084815 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvjhm\" (UniqueName: \"kubernetes.io/projected/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-kube-api-access-vvjhm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7kv5x\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.126173 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.520030 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x"] Jan 03 04:56:44 crc kubenswrapper[4865]: I0103 04:56:44.723990 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" event={"ID":"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab","Type":"ContainerStarted","Data":"b7da3f6b6b8feee36b562dc882c45eadaaf1734b7e850f9fea4e8bb521c51b1d"} Jan 03 04:56:46 crc kubenswrapper[4865]: I0103 04:56:46.758963 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" event={"ID":"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab","Type":"ContainerStarted","Data":"918a86b3017ccd0dd0bf39f88d197f06fbbd71725e2184f0807cad2b7b5cb485"} Jan 03 04:56:46 crc kubenswrapper[4865]: I0103 04:56:46.787347 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" podStartSLOduration=2.902264829 podStartE2EDuration="3.78732648s" podCreationTimestamp="2026-01-03 04:56:43 +0000 UTC" firstStartedPulling="2026-01-03 04:56:44.553034272 +0000 UTC m=+2431.670087457" lastFinishedPulling="2026-01-03 04:56:45.438095913 +0000 UTC m=+2432.555149108" observedRunningTime="2026-01-03 04:56:46.780004369 +0000 UTC m=+2433.897057584" watchObservedRunningTime="2026-01-03 04:56:46.78732648 +0000 UTC m=+2433.904379685" Jan 03 04:56:52 crc kubenswrapper[4865]: I0103 04:56:52.156008 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:56:52 crc kubenswrapper[4865]: E0103 04:56:52.157325 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:57:05 crc kubenswrapper[4865]: I0103 04:57:05.156503 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:57:05 crc kubenswrapper[4865]: E0103 04:57:05.157223 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:57:19 crc kubenswrapper[4865]: I0103 04:57:19.156983 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:57:19 crc kubenswrapper[4865]: E0103 04:57:19.158357 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:57:33 crc kubenswrapper[4865]: I0103 04:57:33.164469 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:57:33 crc kubenswrapper[4865]: E0103 04:57:33.165637 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:57:48 crc kubenswrapper[4865]: I0103 04:57:48.156915 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:57:48 crc kubenswrapper[4865]: E0103 04:57:48.158184 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:58:00 crc kubenswrapper[4865]: I0103 04:58:00.156026 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:58:00 crc kubenswrapper[4865]: E0103 04:58:00.157221 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:58:07 crc kubenswrapper[4865]: I0103 04:58:07.816101 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z2zzs"] Jan 03 04:58:07 crc kubenswrapper[4865]: I0103 04:58:07.818655 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:07 crc kubenswrapper[4865]: I0103 04:58:07.855362 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2zzs"] Jan 03 04:58:07 crc kubenswrapper[4865]: I0103 04:58:07.914548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-catalog-content\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:07 crc kubenswrapper[4865]: I0103 04:58:07.914681 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-utilities\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:07 crc kubenswrapper[4865]: I0103 04:58:07.914714 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqszz\" (UniqueName: \"kubernetes.io/projected/118f575d-3379-42eb-95e5-5a51903bb5fa-kube-api-access-mqszz\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.016066 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-catalog-content\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.016213 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-utilities\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.016246 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqszz\" (UniqueName: \"kubernetes.io/projected/118f575d-3379-42eb-95e5-5a51903bb5fa-kube-api-access-mqszz\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.016550 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-catalog-content\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.016588 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-utilities\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.039221 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqszz\" (UniqueName: \"kubernetes.io/projected/118f575d-3379-42eb-95e5-5a51903bb5fa-kube-api-access-mqszz\") pod \"redhat-operators-z2zzs\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.165132 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:08 crc kubenswrapper[4865]: I0103 04:58:08.633104 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z2zzs"] Jan 03 04:58:08 crc kubenswrapper[4865]: W0103 04:58:08.639567 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod118f575d_3379_42eb_95e5_5a51903bb5fa.slice/crio-55a89a7ea2c607f17553abe8b173e1e9bacf78c923e5c787c649802c8e8d59ca WatchSource:0}: Error finding container 55a89a7ea2c607f17553abe8b173e1e9bacf78c923e5c787c649802c8e8d59ca: Status 404 returned error can't find the container with id 55a89a7ea2c607f17553abe8b173e1e9bacf78c923e5c787c649802c8e8d59ca Jan 03 04:58:09 crc kubenswrapper[4865]: I0103 04:58:09.587888 4865 generic.go:334] "Generic (PLEG): container finished" podID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerID="e8f2744d940bfa3511c14bb4f35c85b68be58c0889b30eb3dded089aad41ad1e" exitCode=0 Jan 03 04:58:09 crc kubenswrapper[4865]: I0103 04:58:09.587940 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2zzs" event={"ID":"118f575d-3379-42eb-95e5-5a51903bb5fa","Type":"ContainerDied","Data":"e8f2744d940bfa3511c14bb4f35c85b68be58c0889b30eb3dded089aad41ad1e"} Jan 03 04:58:09 crc kubenswrapper[4865]: I0103 04:58:09.588213 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2zzs" event={"ID":"118f575d-3379-42eb-95e5-5a51903bb5fa","Type":"ContainerStarted","Data":"55a89a7ea2c607f17553abe8b173e1e9bacf78c923e5c787c649802c8e8d59ca"} Jan 03 04:58:10 crc kubenswrapper[4865]: I0103 04:58:10.596019 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2zzs" 
event={"ID":"118f575d-3379-42eb-95e5-5a51903bb5fa","Type":"ContainerStarted","Data":"42d090ef5639b532ab461a566a365e8809c139ff6e4d96d4854a7d83fa0551bb"} Jan 03 04:58:11 crc kubenswrapper[4865]: I0103 04:58:11.609615 4865 generic.go:334] "Generic (PLEG): container finished" podID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerID="42d090ef5639b532ab461a566a365e8809c139ff6e4d96d4854a7d83fa0551bb" exitCode=0 Jan 03 04:58:11 crc kubenswrapper[4865]: I0103 04:58:11.609700 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2zzs" event={"ID":"118f575d-3379-42eb-95e5-5a51903bb5fa","Type":"ContainerDied","Data":"42d090ef5639b532ab461a566a365e8809c139ff6e4d96d4854a7d83fa0551bb"} Jan 03 04:58:12 crc kubenswrapper[4865]: I0103 04:58:12.621688 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2zzs" event={"ID":"118f575d-3379-42eb-95e5-5a51903bb5fa","Type":"ContainerStarted","Data":"158c67602a9afbb427d444b6ef5b6f8287e7e69c468999e02d8a98a2670fbfc0"} Jan 03 04:58:15 crc kubenswrapper[4865]: I0103 04:58:15.156777 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:58:15 crc kubenswrapper[4865]: E0103 04:58:15.157201 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:58:18 crc kubenswrapper[4865]: I0103 04:58:18.165534 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:18 crc kubenswrapper[4865]: I0103 04:58:18.166210 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:18 crc kubenswrapper[4865]: I0103 04:58:18.250921 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:18 crc kubenswrapper[4865]: I0103 04:58:18.300603 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z2zzs" podStartSLOduration=8.848003592 podStartE2EDuration="11.300573626s" podCreationTimestamp="2026-01-03 04:58:07 +0000 UTC" firstStartedPulling="2026-01-03 04:58:09.590184561 +0000 UTC m=+2516.707237756" lastFinishedPulling="2026-01-03 04:58:12.042754605 +0000 UTC m=+2519.159807790" observedRunningTime="2026-01-03 04:58:12.644687651 +0000 UTC m=+2519.761740836" watchObservedRunningTime="2026-01-03 04:58:18.300573626 +0000 UTC m=+2525.417626851" Jan 03 04:58:18 crc kubenswrapper[4865]: I0103 04:58:18.753499 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:18 crc kubenswrapper[4865]: I0103 04:58:18.809153 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2zzs"] Jan 03 04:58:20 crc kubenswrapper[4865]: I0103 04:58:20.713114 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z2zzs" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="registry-server" containerID="cri-o://158c67602a9afbb427d444b6ef5b6f8287e7e69c468999e02d8a98a2670fbfc0" gracePeriod=2 Jan 03 04:58:21 crc kubenswrapper[4865]: I0103 04:58:21.725143 4865 generic.go:334] "Generic (PLEG): container finished" podID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerID="158c67602a9afbb427d444b6ef5b6f8287e7e69c468999e02d8a98a2670fbfc0" exitCode=0 Jan 03 04:58:21 crc kubenswrapper[4865]: I0103 04:58:21.725354 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2zzs" event={"ID":"118f575d-3379-42eb-95e5-5a51903bb5fa","Type":"ContainerDied","Data":"158c67602a9afbb427d444b6ef5b6f8287e7e69c468999e02d8a98a2670fbfc0"} Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.283253 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.344593 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqszz\" (UniqueName: \"kubernetes.io/projected/118f575d-3379-42eb-95e5-5a51903bb5fa-kube-api-access-mqszz\") pod \"118f575d-3379-42eb-95e5-5a51903bb5fa\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.344730 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-catalog-content\") pod \"118f575d-3379-42eb-95e5-5a51903bb5fa\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.344951 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-utilities\") pod \"118f575d-3379-42eb-95e5-5a51903bb5fa\" (UID: \"118f575d-3379-42eb-95e5-5a51903bb5fa\") " Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.346133 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-utilities" (OuterVolumeSpecName: "utilities") pod "118f575d-3379-42eb-95e5-5a51903bb5fa" (UID: "118f575d-3379-42eb-95e5-5a51903bb5fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.368073 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118f575d-3379-42eb-95e5-5a51903bb5fa-kube-api-access-mqszz" (OuterVolumeSpecName: "kube-api-access-mqszz") pod "118f575d-3379-42eb-95e5-5a51903bb5fa" (UID: "118f575d-3379-42eb-95e5-5a51903bb5fa"). InnerVolumeSpecName "kube-api-access-mqszz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.447024 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.447061 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqszz\" (UniqueName: \"kubernetes.io/projected/118f575d-3379-42eb-95e5-5a51903bb5fa-kube-api-access-mqszz\") on node \"crc\" DevicePath \"\"" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.463464 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "118f575d-3379-42eb-95e5-5a51903bb5fa" (UID: "118f575d-3379-42eb-95e5-5a51903bb5fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.548675 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118f575d-3379-42eb-95e5-5a51903bb5fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.736215 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z2zzs" event={"ID":"118f575d-3379-42eb-95e5-5a51903bb5fa","Type":"ContainerDied","Data":"55a89a7ea2c607f17553abe8b173e1e9bacf78c923e5c787c649802c8e8d59ca"} Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.736577 4865 scope.go:117] "RemoveContainer" containerID="158c67602a9afbb427d444b6ef5b6f8287e7e69c468999e02d8a98a2670fbfc0" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.736332 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z2zzs" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.773914 4865 scope.go:117] "RemoveContainer" containerID="42d090ef5639b532ab461a566a365e8809c139ff6e4d96d4854a7d83fa0551bb" Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.802490 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z2zzs"] Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.813954 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z2zzs"] Jan 03 04:58:22 crc kubenswrapper[4865]: I0103 04:58:22.827131 4865 scope.go:117] "RemoveContainer" containerID="e8f2744d940bfa3511c14bb4f35c85b68be58c0889b30eb3dded089aad41ad1e" Jan 03 04:58:23 crc kubenswrapper[4865]: I0103 04:58:23.168138 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" path="/var/lib/kubelet/pods/118f575d-3379-42eb-95e5-5a51903bb5fa/volumes" Jan 03 04:58:29 crc 
kubenswrapper[4865]: I0103 04:58:29.156249 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:58:29 crc kubenswrapper[4865]: E0103 04:58:29.157174 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:58:43 crc kubenswrapper[4865]: I0103 04:58:43.161497 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:58:43 crc kubenswrapper[4865]: E0103 04:58:43.162423 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:58:57 crc kubenswrapper[4865]: I0103 04:58:57.156191 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:58:57 crc kubenswrapper[4865]: E0103 04:58:57.157419 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 
03 04:59:09 crc kubenswrapper[4865]: I0103 04:59:09.156534 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:59:09 crc kubenswrapper[4865]: E0103 04:59:09.157596 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:59:20 crc kubenswrapper[4865]: I0103 04:59:20.155618 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:59:20 crc kubenswrapper[4865]: E0103 04:59:20.156732 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:59:26 crc kubenswrapper[4865]: I0103 04:59:26.376158 4865 generic.go:334] "Generic (PLEG): container finished" podID="0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" containerID="918a86b3017ccd0dd0bf39f88d197f06fbbd71725e2184f0807cad2b7b5cb485" exitCode=0 Jan 03 04:59:26 crc kubenswrapper[4865]: I0103 04:59:26.376370 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" event={"ID":"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab","Type":"ContainerDied","Data":"918a86b3017ccd0dd0bf39f88d197f06fbbd71725e2184f0807cad2b7b5cb485"} Jan 03 04:59:27 crc kubenswrapper[4865]: I0103 04:59:27.867892 4865 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.026585 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-inventory\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.026944 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-1\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.026999 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-1\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.027102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-0\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.027135 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-0\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc 
kubenswrapper[4865]: I0103 04:59:28.027168 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvjhm\" (UniqueName: \"kubernetes.io/projected/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-kube-api-access-vvjhm\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.027248 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-extra-config-0\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.027281 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-ssh-key\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.027302 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-combined-ca-bundle\") pod \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\" (UID: \"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab\") " Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.032120 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-kube-api-access-vvjhm" (OuterVolumeSpecName: "kube-api-access-vvjhm") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "kube-api-access-vvjhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.045527 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.052960 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.053593 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.061590 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.064276 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.064844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-inventory" (OuterVolumeSpecName: "inventory") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.066956 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.074635 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" (UID: "0d3ac9c6-cfbf-4614-abb7-9a4338b90aab"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129675 4865 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129732 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129755 4865 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129773 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvjhm\" (UniqueName: \"kubernetes.io/projected/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-kube-api-access-vvjhm\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129791 4865 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129806 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129822 4865 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129839 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.129853 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d3ac9c6-cfbf-4614-abb7-9a4338b90aab-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.395949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" event={"ID":"0d3ac9c6-cfbf-4614-abb7-9a4338b90aab","Type":"ContainerDied","Data":"b7da3f6b6b8feee36b562dc882c45eadaaf1734b7e850f9fea4e8bb521c51b1d"} Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.396003 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7da3f6b6b8feee36b562dc882c45eadaaf1734b7e850f9fea4e8bb521c51b1d" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.396005 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7kv5x" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.617393 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj"] Jan 03 04:59:28 crc kubenswrapper[4865]: E0103 04:59:28.617867 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="registry-server" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.617886 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="registry-server" Jan 03 04:59:28 crc kubenswrapper[4865]: E0103 04:59:28.617902 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="extract-utilities" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.617910 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="extract-utilities" Jan 03 04:59:28 crc kubenswrapper[4865]: E0103 04:59:28.617918 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="extract-content" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.617924 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="extract-content" Jan 03 04:59:28 crc kubenswrapper[4865]: E0103 04:59:28.617964 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.617970 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.618183 4865 
memory_manager.go:354] "RemoveStaleState removing state" podUID="118f575d-3379-42eb-95e5-5a51903bb5fa" containerName="registry-server" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.618200 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3ac9c6-cfbf-4614-abb7-9a4338b90aab" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.619154 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.628746 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj"] Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.631511 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.631782 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.631907 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.635257 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.635364 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wp64" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.739440 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: 
\"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.739496 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdbg\" (UniqueName: \"kubernetes.io/projected/842e7570-e53d-4a45-91cf-d37579440783-kube-api-access-svdbg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.739541 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.739596 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.739624 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.739688 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.739720 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.841234 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.842900 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.843482 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.843725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svdbg\" (UniqueName: \"kubernetes.io/projected/842e7570-e53d-4a45-91cf-d37579440783-kube-api-access-svdbg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.844012 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.844329 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.844680 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.846215 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.847067 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.848693 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.854066 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.857456 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.857994 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.879424 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdbg\" (UniqueName: \"kubernetes.io/projected/842e7570-e53d-4a45-91cf-d37579440783-kube-api-access-svdbg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:28 crc kubenswrapper[4865]: I0103 04:59:28.940981 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 04:59:29 crc kubenswrapper[4865]: I0103 04:59:29.564992 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj"] Jan 03 04:59:30 crc kubenswrapper[4865]: I0103 04:59:30.418261 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" event={"ID":"842e7570-e53d-4a45-91cf-d37579440783","Type":"ContainerStarted","Data":"4a7fe022c6ab454d03a26dbd14f549586e28dcbe6eaa8e45aa9c7ef6b0990bc1"} Jan 03 04:59:30 crc kubenswrapper[4865]: I0103 04:59:30.418322 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" event={"ID":"842e7570-e53d-4a45-91cf-d37579440783","Type":"ContainerStarted","Data":"901c759a6199c79e358d568a1a6b31c27a5c2e19cf2774a955bdd0f7e7dbd252"} Jan 03 04:59:32 crc kubenswrapper[4865]: I0103 04:59:32.156778 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:59:32 crc kubenswrapper[4865]: E0103 04:59:32.157741 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:59:47 crc kubenswrapper[4865]: I0103 04:59:47.155838 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:59:47 crc kubenswrapper[4865]: E0103 04:59:47.156675 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 04:59:59 crc kubenswrapper[4865]: I0103 04:59:59.156459 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 04:59:59 crc kubenswrapper[4865]: E0103 04:59:59.157811 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.156782 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" podStartSLOduration=31.575144389 podStartE2EDuration="32.156752555s" podCreationTimestamp="2026-01-03 04:59:28 +0000 UTC" firstStartedPulling="2026-01-03 04:59:29.569938698 +0000 UTC m=+2596.686991873" lastFinishedPulling="2026-01-03 04:59:30.151546844 +0000 UTC m=+2597.268600039" observedRunningTime="2026-01-03 04:59:30.442729773 +0000 UTC m=+2597.559782988" watchObservedRunningTime="2026-01-03 05:00:00.156752555 +0000 UTC m=+2627.273805780" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.177481 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf"] Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.179349 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.182996 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.184185 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.196288 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf"] Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.302143 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-secret-volume\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.302186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-config-volume\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.302219 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vtxr\" (UniqueName: \"kubernetes.io/projected/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-kube-api-access-6vtxr\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.404743 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-secret-volume\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.404796 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-config-volume\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.404857 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vtxr\" (UniqueName: \"kubernetes.io/projected/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-kube-api-access-6vtxr\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.405736 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-config-volume\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.413232 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-secret-volume\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.426780 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vtxr\" (UniqueName: \"kubernetes.io/projected/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-kube-api-access-6vtxr\") pod \"collect-profiles-29456940-cw6pf\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.524813 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:00 crc kubenswrapper[4865]: I0103 05:00:00.817427 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf"] Jan 03 05:00:01 crc kubenswrapper[4865]: I0103 05:00:01.749897 4865 generic.go:334] "Generic (PLEG): container finished" podID="04c1eb67-7d87-4d89-a3a1-e78bbb18de88" containerID="d2abc51d44ee9b6607e1fb33306ec9fdedd4a2fdbdc48a875ecfcce04e9c274e" exitCode=0 Jan 03 05:00:01 crc kubenswrapper[4865]: I0103 05:00:01.750279 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" event={"ID":"04c1eb67-7d87-4d89-a3a1-e78bbb18de88","Type":"ContainerDied","Data":"d2abc51d44ee9b6607e1fb33306ec9fdedd4a2fdbdc48a875ecfcce04e9c274e"} Jan 03 05:00:01 crc kubenswrapper[4865]: I0103 05:00:01.750325 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" 
event={"ID":"04c1eb67-7d87-4d89-a3a1-e78bbb18de88","Type":"ContainerStarted","Data":"e3c7a4a5733dab7b90581c3bbecd87159d38fe0681266b34a465442ebb0efa0e"} Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.119275 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.159906 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-config-volume\") pod \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.160082 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vtxr\" (UniqueName: \"kubernetes.io/projected/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-kube-api-access-6vtxr\") pod \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.160112 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-secret-volume\") pod \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\" (UID: \"04c1eb67-7d87-4d89-a3a1-e78bbb18de88\") " Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.160758 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-config-volume" (OuterVolumeSpecName: "config-volume") pod "04c1eb67-7d87-4d89-a3a1-e78bbb18de88" (UID: "04c1eb67-7d87-4d89-a3a1-e78bbb18de88"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.165820 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04c1eb67-7d87-4d89-a3a1-e78bbb18de88" (UID: "04c1eb67-7d87-4d89-a3a1-e78bbb18de88"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.168257 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-kube-api-access-6vtxr" (OuterVolumeSpecName: "kube-api-access-6vtxr") pod "04c1eb67-7d87-4d89-a3a1-e78bbb18de88" (UID: "04c1eb67-7d87-4d89-a3a1-e78bbb18de88"). InnerVolumeSpecName "kube-api-access-6vtxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.262821 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.262981 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vtxr\" (UniqueName: \"kubernetes.io/projected/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-kube-api-access-6vtxr\") on node \"crc\" DevicePath \"\"" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.263006 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1eb67-7d87-4d89-a3a1-e78bbb18de88-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.775247 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" 
event={"ID":"04c1eb67-7d87-4d89-a3a1-e78bbb18de88","Type":"ContainerDied","Data":"e3c7a4a5733dab7b90581c3bbecd87159d38fe0681266b34a465442ebb0efa0e"} Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.775671 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c7a4a5733dab7b90581c3bbecd87159d38fe0681266b34a465442ebb0efa0e" Jan 03 05:00:03 crc kubenswrapper[4865]: I0103 05:00:03.775358 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456940-cw6pf" Jan 03 05:00:04 crc kubenswrapper[4865]: I0103 05:00:04.221908 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g"] Jan 03 05:00:04 crc kubenswrapper[4865]: I0103 05:00:04.233750 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456895-5vk8g"] Jan 03 05:00:05 crc kubenswrapper[4865]: I0103 05:00:05.177294 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6f7e21-c705-44d9-9092-0f2ed4a7cf60" path="/var/lib/kubelet/pods/ed6f7e21-c705-44d9-9092-0f2ed4a7cf60/volumes" Jan 03 05:00:12 crc kubenswrapper[4865]: I0103 05:00:12.156142 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 05:00:12 crc kubenswrapper[4865]: E0103 05:00:12.158775 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:00:23 crc kubenswrapper[4865]: I0103 05:00:23.169555 4865 scope.go:117] "RemoveContainer" 
containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 05:00:23 crc kubenswrapper[4865]: E0103 05:00:23.170655 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:00:36 crc kubenswrapper[4865]: I0103 05:00:36.968433 4865 scope.go:117] "RemoveContainer" containerID="27e952f3dbe4bd6a66b814eb48de750a9be61782767099ed8b6f3dcf6fb0483f" Jan 03 05:00:38 crc kubenswrapper[4865]: I0103 05:00:38.156349 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 05:00:38 crc kubenswrapper[4865]: E0103 05:00:38.157356 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:00:53 crc kubenswrapper[4865]: I0103 05:00:53.169527 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 05:00:53 crc kubenswrapper[4865]: E0103 05:00:53.170855 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.171306 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29456941-p58zp"] Jan 03 05:01:00 crc kubenswrapper[4865]: E0103 05:01:00.172339 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c1eb67-7d87-4d89-a3a1-e78bbb18de88" containerName="collect-profiles" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.172359 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c1eb67-7d87-4d89-a3a1-e78bbb18de88" containerName="collect-profiles" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.172610 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c1eb67-7d87-4d89-a3a1-e78bbb18de88" containerName="collect-profiles" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.173412 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.181544 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29456941-p58zp"] Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.286529 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-config-data\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.286673 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-fernet-keys\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " 
pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.286757 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-combined-ca-bundle\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.286965 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqv2n\" (UniqueName: \"kubernetes.io/projected/74c76b1f-c632-4f93-add1-5d8150f79004-kube-api-access-kqv2n\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.388864 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqv2n\" (UniqueName: \"kubernetes.io/projected/74c76b1f-c632-4f93-add1-5d8150f79004-kube-api-access-kqv2n\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.389163 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-config-data\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.389280 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-fernet-keys\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " 
pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.389422 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-combined-ca-bundle\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.395313 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-combined-ca-bundle\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.396486 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-config-data\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.396604 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-fernet-keys\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc kubenswrapper[4865]: I0103 05:01:00.408074 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqv2n\" (UniqueName: \"kubernetes.io/projected/74c76b1f-c632-4f93-add1-5d8150f79004-kube-api-access-kqv2n\") pod \"keystone-cron-29456941-p58zp\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:00 crc 
kubenswrapper[4865]: I0103 05:01:00.504981 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:01 crc kubenswrapper[4865]: I0103 05:01:01.004933 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29456941-p58zp"] Jan 03 05:01:01 crc kubenswrapper[4865]: W0103 05:01:01.016696 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74c76b1f_c632_4f93_add1_5d8150f79004.slice/crio-4b8129010183448325712329b348b6c2b814a179e03d8ca2c41564c0af731a0e WatchSource:0}: Error finding container 4b8129010183448325712329b348b6c2b814a179e03d8ca2c41564c0af731a0e: Status 404 returned error can't find the container with id 4b8129010183448325712329b348b6c2b814a179e03d8ca2c41564c0af731a0e Jan 03 05:01:01 crc kubenswrapper[4865]: I0103 05:01:01.378118 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29456941-p58zp" event={"ID":"74c76b1f-c632-4f93-add1-5d8150f79004","Type":"ContainerStarted","Data":"dde40d27ffaf6c7c9cc790c343d3b85b5ad50dcf53a2fcabb28691c6a2bfb233"} Jan 03 05:01:01 crc kubenswrapper[4865]: I0103 05:01:01.378167 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29456941-p58zp" event={"ID":"74c76b1f-c632-4f93-add1-5d8150f79004","Type":"ContainerStarted","Data":"4b8129010183448325712329b348b6c2b814a179e03d8ca2c41564c0af731a0e"} Jan 03 05:01:03 crc kubenswrapper[4865]: I0103 05:01:03.407065 4865 generic.go:334] "Generic (PLEG): container finished" podID="74c76b1f-c632-4f93-add1-5d8150f79004" containerID="dde40d27ffaf6c7c9cc790c343d3b85b5ad50dcf53a2fcabb28691c6a2bfb233" exitCode=0 Jan 03 05:01:03 crc kubenswrapper[4865]: I0103 05:01:03.407138 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29456941-p58zp" 
event={"ID":"74c76b1f-c632-4f93-add1-5d8150f79004","Type":"ContainerDied","Data":"dde40d27ffaf6c7c9cc790c343d3b85b5ad50dcf53a2fcabb28691c6a2bfb233"} Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.792258 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.914753 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-config-data\") pod \"74c76b1f-c632-4f93-add1-5d8150f79004\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.915033 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqv2n\" (UniqueName: \"kubernetes.io/projected/74c76b1f-c632-4f93-add1-5d8150f79004-kube-api-access-kqv2n\") pod \"74c76b1f-c632-4f93-add1-5d8150f79004\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.915147 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-combined-ca-bundle\") pod \"74c76b1f-c632-4f93-add1-5d8150f79004\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.915188 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-fernet-keys\") pod \"74c76b1f-c632-4f93-add1-5d8150f79004\" (UID: \"74c76b1f-c632-4f93-add1-5d8150f79004\") " Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.921197 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c76b1f-c632-4f93-add1-5d8150f79004-kube-api-access-kqv2n" 
(OuterVolumeSpecName: "kube-api-access-kqv2n") pod "74c76b1f-c632-4f93-add1-5d8150f79004" (UID: "74c76b1f-c632-4f93-add1-5d8150f79004"). InnerVolumeSpecName "kube-api-access-kqv2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.921826 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "74c76b1f-c632-4f93-add1-5d8150f79004" (UID: "74c76b1f-c632-4f93-add1-5d8150f79004"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.942478 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74c76b1f-c632-4f93-add1-5d8150f79004" (UID: "74c76b1f-c632-4f93-add1-5d8150f79004"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:01:04 crc kubenswrapper[4865]: I0103 05:01:04.977748 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-config-data" (OuterVolumeSpecName: "config-data") pod "74c76b1f-c632-4f93-add1-5d8150f79004" (UID: "74c76b1f-c632-4f93-add1-5d8150f79004"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:01:05 crc kubenswrapper[4865]: I0103 05:01:05.018160 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqv2n\" (UniqueName: \"kubernetes.io/projected/74c76b1f-c632-4f93-add1-5d8150f79004-kube-api-access-kqv2n\") on node \"crc\" DevicePath \"\"" Jan 03 05:01:05 crc kubenswrapper[4865]: I0103 05:01:05.018669 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:01:05 crc kubenswrapper[4865]: I0103 05:01:05.018809 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 05:01:05 crc kubenswrapper[4865]: I0103 05:01:05.019039 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c76b1f-c632-4f93-add1-5d8150f79004-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 05:01:05 crc kubenswrapper[4865]: I0103 05:01:05.431116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29456941-p58zp" event={"ID":"74c76b1f-c632-4f93-add1-5d8150f79004","Type":"ContainerDied","Data":"4b8129010183448325712329b348b6c2b814a179e03d8ca2c41564c0af731a0e"} Jan 03 05:01:05 crc kubenswrapper[4865]: I0103 05:01:05.431162 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b8129010183448325712329b348b6c2b814a179e03d8ca2c41564c0af731a0e" Jan 03 05:01:05 crc kubenswrapper[4865]: I0103 05:01:05.431215 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29456941-p58zp" Jan 03 05:01:07 crc kubenswrapper[4865]: I0103 05:01:07.155919 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 05:01:07 crc kubenswrapper[4865]: E0103 05:01:07.156744 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:01:20 crc kubenswrapper[4865]: I0103 05:01:20.155985 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 05:01:20 crc kubenswrapper[4865]: I0103 05:01:20.588125 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"738857988f8d76108a410135a92f75e1f4ab561a47759f719afa92493ba35eb5"} Jan 03 05:02:31 crc kubenswrapper[4865]: I0103 05:02:31.353707 4865 generic.go:334] "Generic (PLEG): container finished" podID="842e7570-e53d-4a45-91cf-d37579440783" containerID="4a7fe022c6ab454d03a26dbd14f549586e28dcbe6eaa8e45aa9c7ef6b0990bc1" exitCode=0 Jan 03 05:02:31 crc kubenswrapper[4865]: I0103 05:02:31.353816 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" event={"ID":"842e7570-e53d-4a45-91cf-d37579440783","Type":"ContainerDied","Data":"4a7fe022c6ab454d03a26dbd14f549586e28dcbe6eaa8e45aa9c7ef6b0990bc1"} Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.856287 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.908407 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-1\") pod \"842e7570-e53d-4a45-91cf-d37579440783\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.908490 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-2\") pod \"842e7570-e53d-4a45-91cf-d37579440783\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.908545 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svdbg\" (UniqueName: \"kubernetes.io/projected/842e7570-e53d-4a45-91cf-d37579440783-kube-api-access-svdbg\") pod \"842e7570-e53d-4a45-91cf-d37579440783\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.908587 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ssh-key\") pod \"842e7570-e53d-4a45-91cf-d37579440783\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.908662 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-telemetry-combined-ca-bundle\") pod \"842e7570-e53d-4a45-91cf-d37579440783\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 
05:02:32.908752 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-inventory\") pod \"842e7570-e53d-4a45-91cf-d37579440783\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.908794 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-0\") pod \"842e7570-e53d-4a45-91cf-d37579440783\" (UID: \"842e7570-e53d-4a45-91cf-d37579440783\") " Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.914467 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "842e7570-e53d-4a45-91cf-d37579440783" (UID: "842e7570-e53d-4a45-91cf-d37579440783"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.914947 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/842e7570-e53d-4a45-91cf-d37579440783-kube-api-access-svdbg" (OuterVolumeSpecName: "kube-api-access-svdbg") pod "842e7570-e53d-4a45-91cf-d37579440783" (UID: "842e7570-e53d-4a45-91cf-d37579440783"). InnerVolumeSpecName "kube-api-access-svdbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.934705 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-inventory" (OuterVolumeSpecName: "inventory") pod "842e7570-e53d-4a45-91cf-d37579440783" (UID: "842e7570-e53d-4a45-91cf-d37579440783"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.936796 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "842e7570-e53d-4a45-91cf-d37579440783" (UID: "842e7570-e53d-4a45-91cf-d37579440783"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.939502 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "842e7570-e53d-4a45-91cf-d37579440783" (UID: "842e7570-e53d-4a45-91cf-d37579440783"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.945280 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "842e7570-e53d-4a45-91cf-d37579440783" (UID: "842e7570-e53d-4a45-91cf-d37579440783"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:02:32 crc kubenswrapper[4865]: I0103 05:02:32.946821 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "842e7570-e53d-4a45-91cf-d37579440783" (UID: "842e7570-e53d-4a45-91cf-d37579440783"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.010438 4865 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.010464 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-inventory\") on node \"crc\" DevicePath \"\"" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.010472 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.010483 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.010491 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.010503 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svdbg\" (UniqueName: \"kubernetes.io/projected/842e7570-e53d-4a45-91cf-d37579440783-kube-api-access-svdbg\") on node \"crc\" DevicePath \"\"" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.010512 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/842e7570-e53d-4a45-91cf-d37579440783-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.383972 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" event={"ID":"842e7570-e53d-4a45-91cf-d37579440783","Type":"ContainerDied","Data":"901c759a6199c79e358d568a1a6b31c27a5c2e19cf2774a955bdd0f7e7dbd252"} Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.384640 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901c759a6199c79e358d568a1a6b31c27a5c2e19cf2774a955bdd0f7e7dbd252" Jan 03 05:02:33 crc kubenswrapper[4865]: I0103 05:02:33.384045 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj" Jan 03 05:02:37 crc kubenswrapper[4865]: I0103 05:02:37.081159 4865 scope.go:117] "RemoveContainer" containerID="fcfc83e419f20daa7af3783ca3c9f82e657924101f98f57b6601e16fe0a04626" Jan 03 05:02:37 crc kubenswrapper[4865]: I0103 05:02:37.118412 4865 scope.go:117] "RemoveContainer" containerID="e1967fa4f080c8d8288d3012e7d05f60d5e62cd569570f41aab292b11b0051fa" Jan 03 05:02:37 crc kubenswrapper[4865]: I0103 05:02:37.196970 4865 scope.go:117] "RemoveContainer" containerID="5c60a68be79ff4743dda0a244483066c30aad9c642129814cf8915ec4a5d14d0" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.747941 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 03 05:03:26 crc kubenswrapper[4865]: E0103 05:03:26.749029 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c76b1f-c632-4f93-add1-5d8150f79004" containerName="keystone-cron" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.749049 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c76b1f-c632-4f93-add1-5d8150f79004" containerName="keystone-cron" Jan 03 05:03:26 crc kubenswrapper[4865]: 
E0103 05:03:26.749066 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842e7570-e53d-4a45-91cf-d37579440783" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.749076 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="842e7570-e53d-4a45-91cf-d37579440783" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.749322 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c76b1f-c632-4f93-add1-5d8150f79004" containerName="keystone-cron" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.749358 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="842e7570-e53d-4a45-91cf-d37579440783" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.750181 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.753560 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.753793 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.754025 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.755412 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-smm7z" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.758595 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.860766 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-config-data\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.860810 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.860912 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.860942 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.860970 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.861041 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.861077 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.861145 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.861222 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68sz8\" (UniqueName: \"kubernetes.io/projected/d51a1b58-dba9-4c1f-87bb-bce07ad57852-kube-api-access-68sz8\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962442 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68sz8\" (UniqueName: \"kubernetes.io/projected/d51a1b58-dba9-4c1f-87bb-bce07ad57852-kube-api-access-68sz8\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962496 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-config-data\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962519 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962567 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962592 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962615 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962651 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962673 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.962700 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.963032 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.963754 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.964922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.965813 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.966105 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.987262 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.988010 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:26 crc kubenswrapper[4865]: I0103 05:03:26.989963 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:27 crc 
kubenswrapper[4865]: I0103 05:03:27.017512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68sz8\" (UniqueName: \"kubernetes.io/projected/d51a1b58-dba9-4c1f-87bb-bce07ad57852-kube-api-access-68sz8\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:27 crc kubenswrapper[4865]: I0103 05:03:27.025599 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " pod="openstack/tempest-tests-tempest" Jan 03 05:03:27 crc kubenswrapper[4865]: I0103 05:03:27.106220 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 05:03:27 crc kubenswrapper[4865]: I0103 05:03:27.628755 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 03 05:03:27 crc kubenswrapper[4865]: I0103 05:03:27.652084 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 05:03:27 crc kubenswrapper[4865]: I0103 05:03:27.975603 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d51a1b58-dba9-4c1f-87bb-bce07ad57852","Type":"ContainerStarted","Data":"a10d33fdb4a01907a596eda8e6092b449ceee810d5cf00c76bc270a892f4e8fc"} Jan 03 05:03:40 crc kubenswrapper[4865]: I0103 05:03:40.739604 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:03:40 crc kubenswrapper[4865]: I0103 05:03:40.740352 4865 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.550842 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cbvzq"] Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.553675 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.570696 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbvzq"] Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.702298 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-catalog-content\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.702345 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-utilities\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.702374 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jb5\" (UniqueName: \"kubernetes.io/projected/5d01724b-8590-4ddb-941e-4f41e7a81518-kube-api-access-g7jb5\") pod \"community-operators-cbvzq\" (UID: 
\"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.804116 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-catalog-content\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.804167 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-utilities\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.804196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jb5\" (UniqueName: \"kubernetes.io/projected/5d01724b-8590-4ddb-941e-4f41e7a81518-kube-api-access-g7jb5\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.804916 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-catalog-content\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.805126 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-utilities\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") 
" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.833318 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jb5\" (UniqueName: \"kubernetes.io/projected/5d01724b-8590-4ddb-941e-4f41e7a81518-kube-api-access-g7jb5\") pod \"community-operators-cbvzq\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:03:45 crc kubenswrapper[4865]: I0103 05:03:45.902580 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:04:05 crc kubenswrapper[4865]: E0103 05:04:05.920537 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 03 05:04:05 crc kubenswrapper[4865]: E0103 05:04:05.921862 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68sz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d51a1b58-dba9-4c1f-87bb-bce07ad57852): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 03 05:04:05 crc kubenswrapper[4865]: E0103 05:04:05.923410 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d51a1b58-dba9-4c1f-87bb-bce07ad57852" Jan 03 05:04:06 crc kubenswrapper[4865]: I0103 05:04:06.337311 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbvzq"] Jan 03 05:04:06 crc kubenswrapper[4865]: W0103 05:04:06.347851 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d01724b_8590_4ddb_941e_4f41e7a81518.slice/crio-70e41b557f70383432d4f1e955199ef29e3366a5f1dd4b368835c1f6519a97ba WatchSource:0}: Error finding container 70e41b557f70383432d4f1e955199ef29e3366a5f1dd4b368835c1f6519a97ba: Status 404 returned error can't find the container with id 70e41b557f70383432d4f1e955199ef29e3366a5f1dd4b368835c1f6519a97ba Jan 03 05:04:06 crc kubenswrapper[4865]: I0103 05:04:06.370168 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbvzq" event={"ID":"5d01724b-8590-4ddb-941e-4f41e7a81518","Type":"ContainerStarted","Data":"70e41b557f70383432d4f1e955199ef29e3366a5f1dd4b368835c1f6519a97ba"} Jan 03 05:04:06 crc kubenswrapper[4865]: E0103 05:04:06.372584 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d51a1b58-dba9-4c1f-87bb-bce07ad57852" Jan 03 05:04:07 crc kubenswrapper[4865]: I0103 05:04:07.380633 4865 generic.go:334] "Generic (PLEG): container finished" podID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerID="5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d" exitCode=0 Jan 03 05:04:07 crc kubenswrapper[4865]: I0103 05:04:07.380755 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbvzq" event={"ID":"5d01724b-8590-4ddb-941e-4f41e7a81518","Type":"ContainerDied","Data":"5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d"} Jan 03 05:04:09 crc kubenswrapper[4865]: I0103 05:04:09.406315 4865 generic.go:334] "Generic (PLEG): container finished" podID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerID="ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8" exitCode=0 Jan 03 05:04:09 crc 
kubenswrapper[4865]: I0103 05:04:09.406401 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbvzq" event={"ID":"5d01724b-8590-4ddb-941e-4f41e7a81518","Type":"ContainerDied","Data":"ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8"} Jan 03 05:04:10 crc kubenswrapper[4865]: I0103 05:04:10.419254 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbvzq" event={"ID":"5d01724b-8590-4ddb-941e-4f41e7a81518","Type":"ContainerStarted","Data":"1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c"} Jan 03 05:04:10 crc kubenswrapper[4865]: I0103 05:04:10.448875 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cbvzq" podStartSLOduration=22.96986761 podStartE2EDuration="25.448855822s" podCreationTimestamp="2026-01-03 05:03:45 +0000 UTC" firstStartedPulling="2026-01-03 05:04:07.384035576 +0000 UTC m=+2874.501088761" lastFinishedPulling="2026-01-03 05:04:09.863023758 +0000 UTC m=+2876.980076973" observedRunningTime="2026-01-03 05:04:10.442551211 +0000 UTC m=+2877.559604406" watchObservedRunningTime="2026-01-03 05:04:10.448855822 +0000 UTC m=+2877.565909007" Jan 03 05:04:10 crc kubenswrapper[4865]: I0103 05:04:10.739533 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:04:10 crc kubenswrapper[4865]: I0103 05:04:10.739602 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 03 05:04:15 crc kubenswrapper[4865]: I0103 05:04:15.902809 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:04:15 crc kubenswrapper[4865]: I0103 05:04:15.903243 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:04:15 crc kubenswrapper[4865]: I0103 05:04:15.970512 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:04:16 crc kubenswrapper[4865]: I0103 05:04:16.568648 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:04:16 crc kubenswrapper[4865]: I0103 05:04:16.741556 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbvzq"] Jan 03 05:04:18 crc kubenswrapper[4865]: I0103 05:04:18.501610 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cbvzq" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="registry-server" containerID="cri-o://1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c" gracePeriod=2 Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.017211 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.129204 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7jb5\" (UniqueName: \"kubernetes.io/projected/5d01724b-8590-4ddb-941e-4f41e7a81518-kube-api-access-g7jb5\") pod \"5d01724b-8590-4ddb-941e-4f41e7a81518\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.129361 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-utilities\") pod \"5d01724b-8590-4ddb-941e-4f41e7a81518\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.129447 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-catalog-content\") pod \"5d01724b-8590-4ddb-941e-4f41e7a81518\" (UID: \"5d01724b-8590-4ddb-941e-4f41e7a81518\") " Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.130570 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-utilities" (OuterVolumeSpecName: "utilities") pod "5d01724b-8590-4ddb-941e-4f41e7a81518" (UID: "5d01724b-8590-4ddb-941e-4f41e7a81518"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.135827 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d01724b-8590-4ddb-941e-4f41e7a81518-kube-api-access-g7jb5" (OuterVolumeSpecName: "kube-api-access-g7jb5") pod "5d01724b-8590-4ddb-941e-4f41e7a81518" (UID: "5d01724b-8590-4ddb-941e-4f41e7a81518"). InnerVolumeSpecName "kube-api-access-g7jb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.231532 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7jb5\" (UniqueName: \"kubernetes.io/projected/5d01724b-8590-4ddb-941e-4f41e7a81518-kube-api-access-g7jb5\") on node \"crc\" DevicePath \"\"" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.231563 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.422353 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d01724b-8590-4ddb-941e-4f41e7a81518" (UID: "5d01724b-8590-4ddb-941e-4f41e7a81518"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.434747 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d01724b-8590-4ddb-941e-4f41e7a81518-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.513342 4865 generic.go:334] "Generic (PLEG): container finished" podID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerID="1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c" exitCode=0 Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.513431 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbvzq" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.513427 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbvzq" event={"ID":"5d01724b-8590-4ddb-941e-4f41e7a81518","Type":"ContainerDied","Data":"1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c"} Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.513614 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbvzq" event={"ID":"5d01724b-8590-4ddb-941e-4f41e7a81518","Type":"ContainerDied","Data":"70e41b557f70383432d4f1e955199ef29e3366a5f1dd4b368835c1f6519a97ba"} Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.513690 4865 scope.go:117] "RemoveContainer" containerID="1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.550855 4865 scope.go:117] "RemoveContainer" containerID="ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.556279 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbvzq"] Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.565254 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cbvzq"] Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.586909 4865 scope.go:117] "RemoveContainer" containerID="5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.627227 4865 scope.go:117] "RemoveContainer" containerID="1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c" Jan 03 05:04:19 crc kubenswrapper[4865]: E0103 05:04:19.628089 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c\": container with ID starting with 1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c not found: ID does not exist" containerID="1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.628142 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c"} err="failed to get container status \"1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c\": rpc error: code = NotFound desc = could not find container \"1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c\": container with ID starting with 1acbefc323a817a41ca7dae5c94d228ce1bbbb2d99ff50cd54ca25a6eab8457c not found: ID does not exist" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.628179 4865 scope.go:117] "RemoveContainer" containerID="ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8" Jan 03 05:04:19 crc kubenswrapper[4865]: E0103 05:04:19.628668 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8\": container with ID starting with ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8 not found: ID does not exist" containerID="ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.628733 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8"} err="failed to get container status \"ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8\": rpc error: code = NotFound desc = could not find container \"ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8\": container with ID 
starting with ddfe9db75a13998739fc41a0346e31f03224e32cbdca45524f9f63fc31b726a8 not found: ID does not exist" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.628782 4865 scope.go:117] "RemoveContainer" containerID="5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d" Jan 03 05:04:19 crc kubenswrapper[4865]: E0103 05:04:19.629269 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d\": container with ID starting with 5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d not found: ID does not exist" containerID="5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d" Jan 03 05:04:19 crc kubenswrapper[4865]: I0103 05:04:19.629641 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d"} err="failed to get container status \"5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d\": rpc error: code = NotFound desc = could not find container \"5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d\": container with ID starting with 5e4de261ad8c368d6a8884057729fab33c62bc7d59cc0363d29d6ce5bea2517d not found: ID does not exist" Jan 03 05:04:21 crc kubenswrapper[4865]: I0103 05:04:21.175612 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" path="/var/lib/kubelet/pods/5d01724b-8590-4ddb-941e-4f41e7a81518/volumes" Jan 03 05:04:21 crc kubenswrapper[4865]: I0103 05:04:21.370426 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 03 05:04:22 crc kubenswrapper[4865]: I0103 05:04:22.580678 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"d51a1b58-dba9-4c1f-87bb-bce07ad57852","Type":"ContainerStarted","Data":"a264d5b271b8dee798e719022af5b48dfeb881b0b615fc7f7c5b2b1454945e1b"} Jan 03 05:04:22 crc kubenswrapper[4865]: I0103 05:04:22.612805 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.897804247 podStartE2EDuration="57.612784016s" podCreationTimestamp="2026-01-03 05:03:25 +0000 UTC" firstStartedPulling="2026-01-03 05:03:27.651908432 +0000 UTC m=+2834.768961607" lastFinishedPulling="2026-01-03 05:04:21.366888151 +0000 UTC m=+2888.483941376" observedRunningTime="2026-01-03 05:04:22.603982086 +0000 UTC m=+2889.721035291" watchObservedRunningTime="2026-01-03 05:04:22.612784016 +0000 UTC m=+2889.729837211" Jan 03 05:04:40 crc kubenswrapper[4865]: I0103 05:04:40.739628 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:04:40 crc kubenswrapper[4865]: I0103 05:04:40.740258 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:04:40 crc kubenswrapper[4865]: I0103 05:04:40.740359 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 05:04:40 crc kubenswrapper[4865]: I0103 05:04:40.741230 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"738857988f8d76108a410135a92f75e1f4ab561a47759f719afa92493ba35eb5"} 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:04:40 crc kubenswrapper[4865]: I0103 05:04:40.741314 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://738857988f8d76108a410135a92f75e1f4ab561a47759f719afa92493ba35eb5" gracePeriod=600 Jan 03 05:04:41 crc kubenswrapper[4865]: I0103 05:04:41.784950 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="738857988f8d76108a410135a92f75e1f4ab561a47759f719afa92493ba35eb5" exitCode=0 Jan 03 05:04:41 crc kubenswrapper[4865]: I0103 05:04:41.785061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"738857988f8d76108a410135a92f75e1f4ab561a47759f719afa92493ba35eb5"} Jan 03 05:04:41 crc kubenswrapper[4865]: I0103 05:04:41.785585 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085"} Jan 03 05:04:41 crc kubenswrapper[4865]: I0103 05:04:41.785604 4865 scope.go:117] "RemoveContainer" containerID="dda02d3035d97396a7077173f9433e5a136b060b841dcb53c13c82969b21cc21" Jan 03 05:07:10 crc kubenswrapper[4865]: I0103 05:07:10.739153 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 03 05:07:10 crc kubenswrapper[4865]: I0103 05:07:10.739899 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.381187 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vmgf5"] Jan 03 05:07:22 crc kubenswrapper[4865]: E0103 05:07:22.382371 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="registry-server" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.382406 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="registry-server" Jan 03 05:07:22 crc kubenswrapper[4865]: E0103 05:07:22.382439 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="extract-utilities" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.382449 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="extract-utilities" Jan 03 05:07:22 crc kubenswrapper[4865]: E0103 05:07:22.382473 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="extract-content" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.382483 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="extract-content" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.382727 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d01724b-8590-4ddb-941e-4f41e7a81518" containerName="registry-server" Jan 03 05:07:22 crc 
kubenswrapper[4865]: I0103 05:07:22.384455 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.389585 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmgf5"] Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.514061 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-catalog-content\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.514711 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rht85\" (UniqueName: \"kubernetes.io/projected/1497d6b7-e98e-4f93-88a6-77edaa648e38-kube-api-access-rht85\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.514784 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-utilities\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.617090 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rht85\" (UniqueName: \"kubernetes.io/projected/1497d6b7-e98e-4f93-88a6-77edaa648e38-kube-api-access-rht85\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " 
pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.617164 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-utilities\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.617214 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-catalog-content\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.617710 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-catalog-content\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.618012 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-utilities\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.646841 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rht85\" (UniqueName: \"kubernetes.io/projected/1497d6b7-e98e-4f93-88a6-77edaa648e38-kube-api-access-rht85\") pod \"certified-operators-vmgf5\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " 
pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:22 crc kubenswrapper[4865]: I0103 05:07:22.708532 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:23 crc kubenswrapper[4865]: I0103 05:07:23.229890 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmgf5"] Jan 03 05:07:23 crc kubenswrapper[4865]: I0103 05:07:23.686493 4865 generic.go:334] "Generic (PLEG): container finished" podID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerID="75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f" exitCode=0 Jan 03 05:07:23 crc kubenswrapper[4865]: I0103 05:07:23.686668 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmgf5" event={"ID":"1497d6b7-e98e-4f93-88a6-77edaa648e38","Type":"ContainerDied","Data":"75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f"} Jan 03 05:07:23 crc kubenswrapper[4865]: I0103 05:07:23.686788 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmgf5" event={"ID":"1497d6b7-e98e-4f93-88a6-77edaa648e38","Type":"ContainerStarted","Data":"c0a7457c285174776d476be3170e4d6258f7937ce0e019ed1ad3f52e21cd21f4"} Jan 03 05:07:24 crc kubenswrapper[4865]: I0103 05:07:24.696200 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmgf5" event={"ID":"1497d6b7-e98e-4f93-88a6-77edaa648e38","Type":"ContainerStarted","Data":"4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40"} Jan 03 05:07:25 crc kubenswrapper[4865]: I0103 05:07:25.712054 4865 generic.go:334] "Generic (PLEG): container finished" podID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerID="4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40" exitCode=0 Jan 03 05:07:25 crc kubenswrapper[4865]: I0103 05:07:25.712186 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmgf5" event={"ID":"1497d6b7-e98e-4f93-88a6-77edaa648e38","Type":"ContainerDied","Data":"4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40"} Jan 03 05:07:26 crc kubenswrapper[4865]: I0103 05:07:26.723201 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmgf5" event={"ID":"1497d6b7-e98e-4f93-88a6-77edaa648e38","Type":"ContainerStarted","Data":"04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c"} Jan 03 05:07:26 crc kubenswrapper[4865]: I0103 05:07:26.747354 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vmgf5" podStartSLOduration=2.278689278 podStartE2EDuration="4.747335819s" podCreationTimestamp="2026-01-03 05:07:22 +0000 UTC" firstStartedPulling="2026-01-03 05:07:23.728248253 +0000 UTC m=+3070.845301448" lastFinishedPulling="2026-01-03 05:07:26.196894784 +0000 UTC m=+3073.313947989" observedRunningTime="2026-01-03 05:07:26.742004414 +0000 UTC m=+3073.859057619" watchObservedRunningTime="2026-01-03 05:07:26.747335819 +0000 UTC m=+3073.864388994" Jan 03 05:07:32 crc kubenswrapper[4865]: I0103 05:07:32.708896 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:32 crc kubenswrapper[4865]: I0103 05:07:32.709528 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:32 crc kubenswrapper[4865]: I0103 05:07:32.767229 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:32 crc kubenswrapper[4865]: I0103 05:07:32.863458 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:33 crc kubenswrapper[4865]: I0103 
05:07:33.018418 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vmgf5"] Jan 03 05:07:34 crc kubenswrapper[4865]: I0103 05:07:34.803982 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vmgf5" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="registry-server" containerID="cri-o://04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c" gracePeriod=2 Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.382132 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.476770 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-utilities\") pod \"1497d6b7-e98e-4f93-88a6-77edaa648e38\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.476837 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-catalog-content\") pod \"1497d6b7-e98e-4f93-88a6-77edaa648e38\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.477039 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rht85\" (UniqueName: \"kubernetes.io/projected/1497d6b7-e98e-4f93-88a6-77edaa648e38-kube-api-access-rht85\") pod \"1497d6b7-e98e-4f93-88a6-77edaa648e38\" (UID: \"1497d6b7-e98e-4f93-88a6-77edaa648e38\") " Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.477697 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-utilities" (OuterVolumeSpecName: 
"utilities") pod "1497d6b7-e98e-4f93-88a6-77edaa648e38" (UID: "1497d6b7-e98e-4f93-88a6-77edaa648e38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.482449 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1497d6b7-e98e-4f93-88a6-77edaa648e38-kube-api-access-rht85" (OuterVolumeSpecName: "kube-api-access-rht85") pod "1497d6b7-e98e-4f93-88a6-77edaa648e38" (UID: "1497d6b7-e98e-4f93-88a6-77edaa648e38"). InnerVolumeSpecName "kube-api-access-rht85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.540281 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1497d6b7-e98e-4f93-88a6-77edaa648e38" (UID: "1497d6b7-e98e-4f93-88a6-77edaa648e38"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.579478 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.579514 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1497d6b7-e98e-4f93-88a6-77edaa648e38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.579526 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rht85\" (UniqueName: \"kubernetes.io/projected/1497d6b7-e98e-4f93-88a6-77edaa648e38-kube-api-access-rht85\") on node \"crc\" DevicePath \"\"" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.817520 4865 generic.go:334] "Generic (PLEG): container finished" podID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerID="04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c" exitCode=0 Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.817613 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vmgf5" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.817646 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmgf5" event={"ID":"1497d6b7-e98e-4f93-88a6-77edaa648e38","Type":"ContainerDied","Data":"04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c"} Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.817999 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmgf5" event={"ID":"1497d6b7-e98e-4f93-88a6-77edaa648e38","Type":"ContainerDied","Data":"c0a7457c285174776d476be3170e4d6258f7937ce0e019ed1ad3f52e21cd21f4"} Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.818032 4865 scope.go:117] "RemoveContainer" containerID="04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.847724 4865 scope.go:117] "RemoveContainer" containerID="4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.873747 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vmgf5"] Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.881178 4865 scope.go:117] "RemoveContainer" containerID="75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.887051 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vmgf5"] Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.921260 4865 scope.go:117] "RemoveContainer" containerID="04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c" Jan 03 05:07:35 crc kubenswrapper[4865]: E0103 05:07:35.921746 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c\": container with ID starting with 04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c not found: ID does not exist" containerID="04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.921794 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c"} err="failed to get container status \"04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c\": rpc error: code = NotFound desc = could not find container \"04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c\": container with ID starting with 04e49bed3344d5f2b534d6edba22dfd6fb88e75356d73e393d3ddaec5e60e78c not found: ID does not exist" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.921820 4865 scope.go:117] "RemoveContainer" containerID="4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40" Jan 03 05:07:35 crc kubenswrapper[4865]: E0103 05:07:35.922279 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40\": container with ID starting with 4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40 not found: ID does not exist" containerID="4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.922327 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40"} err="failed to get container status \"4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40\": rpc error: code = NotFound desc = could not find container \"4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40\": container with ID 
starting with 4cdbbecadc21e1d66b54c148bd428f48b21014f5db295b824ee9b4a128f1fd40 not found: ID does not exist" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.922362 4865 scope.go:117] "RemoveContainer" containerID="75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f" Jan 03 05:07:35 crc kubenswrapper[4865]: E0103 05:07:35.922715 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f\": container with ID starting with 75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f not found: ID does not exist" containerID="75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f" Jan 03 05:07:35 crc kubenswrapper[4865]: I0103 05:07:35.922742 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f"} err="failed to get container status \"75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f\": rpc error: code = NotFound desc = could not find container \"75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f\": container with ID starting with 75cf246e0466db8b273dade03433ab7a863edf2f049c4ed8e571e53b51c5a85f not found: ID does not exist" Jan 03 05:07:37 crc kubenswrapper[4865]: I0103 05:07:37.186435 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" path="/var/lib/kubelet/pods/1497d6b7-e98e-4f93-88a6-77edaa648e38/volumes" Jan 03 05:07:40 crc kubenswrapper[4865]: I0103 05:07:40.739251 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:07:40 crc kubenswrapper[4865]: I0103 
05:07:40.741296 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:08:10 crc kubenswrapper[4865]: I0103 05:08:10.739717 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:08:10 crc kubenswrapper[4865]: I0103 05:08:10.740134 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:08:10 crc kubenswrapper[4865]: I0103 05:08:10.740169 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 05:08:10 crc kubenswrapper[4865]: I0103 05:08:10.740623 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:08:10 crc kubenswrapper[4865]: I0103 05:08:10.740665 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" 
containerName="machine-config-daemon" containerID="cri-o://1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" gracePeriod=600 Jan 03 05:08:10 crc kubenswrapper[4865]: E0103 05:08:10.862164 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:08:11 crc kubenswrapper[4865]: I0103 05:08:11.179255 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" exitCode=0 Jan 03 05:08:11 crc kubenswrapper[4865]: I0103 05:08:11.179333 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085"} Jan 03 05:08:11 crc kubenswrapper[4865]: I0103 05:08:11.180172 4865 scope.go:117] "RemoveContainer" containerID="738857988f8d76108a410135a92f75e1f4ab561a47759f719afa92493ba35eb5" Jan 03 05:08:11 crc kubenswrapper[4865]: I0103 05:08:11.180831 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:08:11 crc kubenswrapper[4865]: E0103 05:08:11.181139 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:08:25 crc kubenswrapper[4865]: I0103 05:08:25.156477 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:08:25 crc kubenswrapper[4865]: E0103 05:08:25.157484 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.360194 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bj8gr"] Jan 03 05:08:27 crc kubenswrapper[4865]: E0103 05:08:27.360929 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="extract-utilities" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.360945 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="extract-utilities" Jan 03 05:08:27 crc kubenswrapper[4865]: E0103 05:08:27.360970 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="registry-server" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.360978 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="registry-server" Jan 03 05:08:27 crc kubenswrapper[4865]: E0103 05:08:27.361003 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="extract-content" Jan 03 05:08:27 crc kubenswrapper[4865]: 
I0103 05:08:27.361012 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="extract-content" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.361248 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1497d6b7-e98e-4f93-88a6-77edaa648e38" containerName="registry-server" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.362810 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.385831 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bj8gr"] Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.459147 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-catalog-content\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.459272 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lrg\" (UniqueName: \"kubernetes.io/projected/e1e93307-79eb-4406-9a1f-48a1e9e779c6-kube-api-access-44lrg\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.459308 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-utilities\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 
05:08:27.561755 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44lrg\" (UniqueName: \"kubernetes.io/projected/e1e93307-79eb-4406-9a1f-48a1e9e779c6-kube-api-access-44lrg\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.561818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-utilities\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.561952 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-catalog-content\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.562945 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-catalog-content\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.563258 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-utilities\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.582498 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-44lrg\" (UniqueName: \"kubernetes.io/projected/e1e93307-79eb-4406-9a1f-48a1e9e779c6-kube-api-access-44lrg\") pod \"redhat-operators-bj8gr\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:27 crc kubenswrapper[4865]: I0103 05:08:27.700083 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:28 crc kubenswrapper[4865]: I0103 05:08:28.152496 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bj8gr"] Jan 03 05:08:28 crc kubenswrapper[4865]: I0103 05:08:28.388909 4865 generic.go:334] "Generic (PLEG): container finished" podID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerID="b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d" exitCode=0 Jan 03 05:08:28 crc kubenswrapper[4865]: I0103 05:08:28.388970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bj8gr" event={"ID":"e1e93307-79eb-4406-9a1f-48a1e9e779c6","Type":"ContainerDied","Data":"b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d"} Jan 03 05:08:28 crc kubenswrapper[4865]: I0103 05:08:28.389300 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bj8gr" event={"ID":"e1e93307-79eb-4406-9a1f-48a1e9e779c6","Type":"ContainerStarted","Data":"7b29b07f9c3436ed798908e5f68bbf2485a8d8ead4805883761b6bea39bd0459"} Jan 03 05:08:28 crc kubenswrapper[4865]: I0103 05:08:28.391140 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 05:08:29 crc kubenswrapper[4865]: I0103 05:08:29.398419 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bj8gr" 
event={"ID":"e1e93307-79eb-4406-9a1f-48a1e9e779c6","Type":"ContainerStarted","Data":"aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8"} Jan 03 05:08:30 crc kubenswrapper[4865]: I0103 05:08:30.410304 4865 generic.go:334] "Generic (PLEG): container finished" podID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerID="aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8" exitCode=0 Jan 03 05:08:30 crc kubenswrapper[4865]: I0103 05:08:30.410356 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bj8gr" event={"ID":"e1e93307-79eb-4406-9a1f-48a1e9e779c6","Type":"ContainerDied","Data":"aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8"} Jan 03 05:08:33 crc kubenswrapper[4865]: I0103 05:08:33.461956 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bj8gr" event={"ID":"e1e93307-79eb-4406-9a1f-48a1e9e779c6","Type":"ContainerStarted","Data":"39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98"} Jan 03 05:08:33 crc kubenswrapper[4865]: I0103 05:08:33.490149 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bj8gr" podStartSLOduration=1.798115819 podStartE2EDuration="6.490126212s" podCreationTimestamp="2026-01-03 05:08:27 +0000 UTC" firstStartedPulling="2026-01-03 05:08:28.39092058 +0000 UTC m=+3135.507973755" lastFinishedPulling="2026-01-03 05:08:33.082930953 +0000 UTC m=+3140.199984148" observedRunningTime="2026-01-03 05:08:33.48079973 +0000 UTC m=+3140.597852925" watchObservedRunningTime="2026-01-03 05:08:33.490126212 +0000 UTC m=+3140.607179407" Jan 03 05:08:37 crc kubenswrapper[4865]: I0103 05:08:37.700409 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:37 crc kubenswrapper[4865]: I0103 05:08:37.701081 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:38 crc kubenswrapper[4865]: I0103 05:08:38.768009 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bj8gr" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="registry-server" probeResult="failure" output=< Jan 03 05:08:38 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 03 05:08:38 crc kubenswrapper[4865]: > Jan 03 05:08:39 crc kubenswrapper[4865]: I0103 05:08:39.155938 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:08:39 crc kubenswrapper[4865]: E0103 05:08:39.156266 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:08:47 crc kubenswrapper[4865]: I0103 05:08:47.785308 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:47 crc kubenswrapper[4865]: I0103 05:08:47.868355 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:48 crc kubenswrapper[4865]: I0103 05:08:48.028019 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bj8gr"] Jan 03 05:08:49 crc kubenswrapper[4865]: I0103 05:08:49.630039 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bj8gr" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="registry-server" 
containerID="cri-o://39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98" gracePeriod=2 Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.223859 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.397028 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-utilities\") pod \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.397116 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44lrg\" (UniqueName: \"kubernetes.io/projected/e1e93307-79eb-4406-9a1f-48a1e9e779c6-kube-api-access-44lrg\") pod \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.397265 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-catalog-content\") pod \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\" (UID: \"e1e93307-79eb-4406-9a1f-48a1e9e779c6\") " Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.398021 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-utilities" (OuterVolumeSpecName: "utilities") pod "e1e93307-79eb-4406-9a1f-48a1e9e779c6" (UID: "e1e93307-79eb-4406-9a1f-48a1e9e779c6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.402051 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e93307-79eb-4406-9a1f-48a1e9e779c6-kube-api-access-44lrg" (OuterVolumeSpecName: "kube-api-access-44lrg") pod "e1e93307-79eb-4406-9a1f-48a1e9e779c6" (UID: "e1e93307-79eb-4406-9a1f-48a1e9e779c6"). InnerVolumeSpecName "kube-api-access-44lrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.499079 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.499485 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44lrg\" (UniqueName: \"kubernetes.io/projected/e1e93307-79eb-4406-9a1f-48a1e9e779c6-kube-api-access-44lrg\") on node \"crc\" DevicePath \"\"" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.513560 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1e93307-79eb-4406-9a1f-48a1e9e779c6" (UID: "e1e93307-79eb-4406-9a1f-48a1e9e779c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.600780 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e93307-79eb-4406-9a1f-48a1e9e779c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.645485 4865 generic.go:334] "Generic (PLEG): container finished" podID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerID="39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98" exitCode=0 Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.645553 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bj8gr" event={"ID":"e1e93307-79eb-4406-9a1f-48a1e9e779c6","Type":"ContainerDied","Data":"39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98"} Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.645581 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bj8gr" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.645622 4865 scope.go:117] "RemoveContainer" containerID="39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.645609 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bj8gr" event={"ID":"e1e93307-79eb-4406-9a1f-48a1e9e779c6","Type":"ContainerDied","Data":"7b29b07f9c3436ed798908e5f68bbf2485a8d8ead4805883761b6bea39bd0459"} Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.681512 4865 scope.go:117] "RemoveContainer" containerID="aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.695972 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bj8gr"] Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.708641 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bj8gr"] Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.788954 4865 scope.go:117] "RemoveContainer" containerID="b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.815270 4865 scope.go:117] "RemoveContainer" containerID="39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98" Jan 03 05:08:50 crc kubenswrapper[4865]: E0103 05:08:50.815969 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98\": container with ID starting with 39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98 not found: ID does not exist" containerID="39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.816025 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98"} err="failed to get container status \"39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98\": rpc error: code = NotFound desc = could not find container \"39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98\": container with ID starting with 39f2924ce72a73848eeeecd7a73f9a46da8d7c275541bdb4aad781f03e0a7f98 not found: ID does not exist" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.816059 4865 scope.go:117] "RemoveContainer" containerID="aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8" Jan 03 05:08:50 crc kubenswrapper[4865]: E0103 05:08:50.816534 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8\": container with ID starting with aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8 not found: ID does not exist" containerID="aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.816602 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8"} err="failed to get container status \"aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8\": rpc error: code = NotFound desc = could not find container \"aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8\": container with ID starting with aba5a106673c1f9d9d868e38b439e7b3a47e228358d43f1318aaeb55f33203b8 not found: ID does not exist" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.816637 4865 scope.go:117] "RemoveContainer" containerID="b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d" Jan 03 05:08:50 crc kubenswrapper[4865]: E0103 
05:08:50.817129 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d\": container with ID starting with b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d not found: ID does not exist" containerID="b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d" Jan 03 05:08:50 crc kubenswrapper[4865]: I0103 05:08:50.817164 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d"} err="failed to get container status \"b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d\": rpc error: code = NotFound desc = could not find container \"b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d\": container with ID starting with b61cbcd2842fd682aaf50ac1942febc4ab94419243efd4d2648d679c1b08788d not found: ID does not exist" Jan 03 05:08:51 crc kubenswrapper[4865]: I0103 05:08:51.168881 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" path="/var/lib/kubelet/pods/e1e93307-79eb-4406-9a1f-48a1e9e779c6/volumes" Jan 03 05:08:52 crc kubenswrapper[4865]: I0103 05:08:52.156715 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:08:52 crc kubenswrapper[4865]: E0103 05:08:52.157617 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:09:03 crc kubenswrapper[4865]: I0103 05:09:03.167059 
4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:09:03 crc kubenswrapper[4865]: E0103 05:09:03.168096 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:09:18 crc kubenswrapper[4865]: I0103 05:09:18.155739 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:09:18 crc kubenswrapper[4865]: E0103 05:09:18.156808 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:09:30 crc kubenswrapper[4865]: I0103 05:09:30.156686 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:09:30 crc kubenswrapper[4865]: E0103 05:09:30.157880 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:09:43 crc kubenswrapper[4865]: I0103 
05:09:43.175920 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:09:43 crc kubenswrapper[4865]: E0103 05:09:43.176970 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:09:56 crc kubenswrapper[4865]: I0103 05:09:56.156561 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:09:56 crc kubenswrapper[4865]: E0103 05:09:56.157341 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:10:07 crc kubenswrapper[4865]: I0103 05:10:07.156183 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:10:07 crc kubenswrapper[4865]: E0103 05:10:07.157756 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:10:21 crc 
kubenswrapper[4865]: I0103 05:10:21.156992 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:10:21 crc kubenswrapper[4865]: E0103 05:10:21.157835 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:10:36 crc kubenswrapper[4865]: I0103 05:10:36.156627 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:10:36 crc kubenswrapper[4865]: E0103 05:10:36.157718 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:10:47 crc kubenswrapper[4865]: I0103 05:10:47.156565 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:10:47 crc kubenswrapper[4865]: E0103 05:10:47.158033 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 
03 05:10:58 crc kubenswrapper[4865]: I0103 05:10:58.155799 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:10:58 crc kubenswrapper[4865]: E0103 05:10:58.156592 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:11:11 crc kubenswrapper[4865]: I0103 05:11:11.160480 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:11:11 crc kubenswrapper[4865]: E0103 05:11:11.161473 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:11:23 crc kubenswrapper[4865]: I0103 05:11:23.169128 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:11:23 crc kubenswrapper[4865]: E0103 05:11:23.170206 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:11:38 crc kubenswrapper[4865]: I0103 05:11:38.156172 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:11:38 crc kubenswrapper[4865]: E0103 05:11:38.157180 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:11:51 crc kubenswrapper[4865]: I0103 05:11:51.156296 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:11:51 crc kubenswrapper[4865]: E0103 05:11:51.157293 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:12:04 crc kubenswrapper[4865]: I0103 05:12:04.156218 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:12:04 crc kubenswrapper[4865]: E0103 05:12:04.157477 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:12:15 crc kubenswrapper[4865]: I0103 05:12:15.157087 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:12:15 crc kubenswrapper[4865]: E0103 05:12:15.159268 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:12:28 crc kubenswrapper[4865]: I0103 05:12:28.156029 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:12:28 crc kubenswrapper[4865]: E0103 05:12:28.157166 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:12:43 crc kubenswrapper[4865]: I0103 05:12:43.162057 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:12:43 crc kubenswrapper[4865]: E0103 05:12:43.163223 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:12:58 crc kubenswrapper[4865]: I0103 05:12:58.155630 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:12:58 crc kubenswrapper[4865]: E0103 05:12:58.156362 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:13:10 crc kubenswrapper[4865]: I0103 05:13:10.157275 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:13:10 crc kubenswrapper[4865]: E0103 05:13:10.157969 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.554160 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tgcxm"] Jan 03 05:13:21 crc kubenswrapper[4865]: E0103 05:13:21.555417 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="extract-content" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 
05:13:21.555440 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="extract-content" Jan 03 05:13:21 crc kubenswrapper[4865]: E0103 05:13:21.555466 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="registry-server" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.555479 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="registry-server" Jan 03 05:13:21 crc kubenswrapper[4865]: E0103 05:13:21.555503 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="extract-utilities" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.555516 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="extract-utilities" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.555822 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e93307-79eb-4406-9a1f-48a1e9e779c6" containerName="registry-server" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.560481 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.571333 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgcxm"] Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.610115 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbds\" (UniqueName: \"kubernetes.io/projected/bdc1f45f-95cb-4688-a2f1-2320cf640f43-kube-api-access-lrbds\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.610413 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-utilities\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.610643 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-catalog-content\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.712110 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrbds\" (UniqueName: \"kubernetes.io/projected/bdc1f45f-95cb-4688-a2f1-2320cf640f43-kube-api-access-lrbds\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.712432 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-utilities\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.712769 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-catalog-content\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.713093 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-utilities\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.713220 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-catalog-content\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.737150 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrbds\" (UniqueName: \"kubernetes.io/projected/bdc1f45f-95cb-4688-a2f1-2320cf640f43-kube-api-access-lrbds\") pod \"redhat-marketplace-tgcxm\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:21 crc kubenswrapper[4865]: I0103 05:13:21.934306 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:22 crc kubenswrapper[4865]: I0103 05:13:22.441648 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgcxm"] Jan 03 05:13:23 crc kubenswrapper[4865]: I0103 05:13:23.402358 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerID="027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b" exitCode=0 Jan 03 05:13:23 crc kubenswrapper[4865]: I0103 05:13:23.402496 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgcxm" event={"ID":"bdc1f45f-95cb-4688-a2f1-2320cf640f43","Type":"ContainerDied","Data":"027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b"} Jan 03 05:13:23 crc kubenswrapper[4865]: I0103 05:13:23.402853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgcxm" event={"ID":"bdc1f45f-95cb-4688-a2f1-2320cf640f43","Type":"ContainerStarted","Data":"a0fec67b02dbe58a2235c7908381862dbc8ecdde9e44af6a148d88b7f62f61b9"} Jan 03 05:13:24 crc kubenswrapper[4865]: I0103 05:13:24.417711 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerID="9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6" exitCode=0 Jan 03 05:13:24 crc kubenswrapper[4865]: I0103 05:13:24.417761 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgcxm" event={"ID":"bdc1f45f-95cb-4688-a2f1-2320cf640f43","Type":"ContainerDied","Data":"9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6"} Jan 03 05:13:25 crc kubenswrapper[4865]: I0103 05:13:25.156605 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:13:25 crc kubenswrapper[4865]: I0103 05:13:25.428862 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"a25e79429347d8e01a1796e4ca821fe5a7c3853f5c499d29a26c8104caca8dcd"} Jan 03 05:13:25 crc kubenswrapper[4865]: I0103 05:13:25.433522 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgcxm" event={"ID":"bdc1f45f-95cb-4688-a2f1-2320cf640f43","Type":"ContainerStarted","Data":"71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647"} Jan 03 05:13:25 crc kubenswrapper[4865]: I0103 05:13:25.478809 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tgcxm" podStartSLOduration=2.93653774 podStartE2EDuration="4.47878393s" podCreationTimestamp="2026-01-03 05:13:21 +0000 UTC" firstStartedPulling="2026-01-03 05:13:23.404890991 +0000 UTC m=+3430.521944176" lastFinishedPulling="2026-01-03 05:13:24.947137171 +0000 UTC m=+3432.064190366" observedRunningTime="2026-01-03 05:13:25.476854798 +0000 UTC m=+3432.593908023" watchObservedRunningTime="2026-01-03 05:13:25.47878393 +0000 UTC m=+3432.595837155" Jan 03 05:13:31 crc kubenswrapper[4865]: I0103 05:13:31.935748 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:31 crc kubenswrapper[4865]: I0103 05:13:31.936358 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:32 crc kubenswrapper[4865]: I0103 05:13:32.009522 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:32 crc kubenswrapper[4865]: I0103 05:13:32.571760 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:32 crc kubenswrapper[4865]: 
I0103 05:13:32.623227 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgcxm"] Jan 03 05:13:34 crc kubenswrapper[4865]: I0103 05:13:34.532076 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tgcxm" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="registry-server" containerID="cri-o://71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647" gracePeriod=2 Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.063654 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.179833 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-catalog-content\") pod \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.179905 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrbds\" (UniqueName: \"kubernetes.io/projected/bdc1f45f-95cb-4688-a2f1-2320cf640f43-kube-api-access-lrbds\") pod \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.180084 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-utilities\") pod \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\" (UID: \"bdc1f45f-95cb-4688-a2f1-2320cf640f43\") " Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.181636 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-utilities" 
(OuterVolumeSpecName: "utilities") pod "bdc1f45f-95cb-4688-a2f1-2320cf640f43" (UID: "bdc1f45f-95cb-4688-a2f1-2320cf640f43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.194802 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc1f45f-95cb-4688-a2f1-2320cf640f43-kube-api-access-lrbds" (OuterVolumeSpecName: "kube-api-access-lrbds") pod "bdc1f45f-95cb-4688-a2f1-2320cf640f43" (UID: "bdc1f45f-95cb-4688-a2f1-2320cf640f43"). InnerVolumeSpecName "kube-api-access-lrbds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.208602 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc1f45f-95cb-4688-a2f1-2320cf640f43" (UID: "bdc1f45f-95cb-4688-a2f1-2320cf640f43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.285285 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.285338 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrbds\" (UniqueName: \"kubernetes.io/projected/bdc1f45f-95cb-4688-a2f1-2320cf640f43-kube-api-access-lrbds\") on node \"crc\" DevicePath \"\"" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.285369 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc1f45f-95cb-4688-a2f1-2320cf640f43-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.550405 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerID="71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647" exitCode=0 Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.550469 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgcxm" event={"ID":"bdc1f45f-95cb-4688-a2f1-2320cf640f43","Type":"ContainerDied","Data":"71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647"} Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.550887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgcxm" event={"ID":"bdc1f45f-95cb-4688-a2f1-2320cf640f43","Type":"ContainerDied","Data":"a0fec67b02dbe58a2235c7908381862dbc8ecdde9e44af6a148d88b7f62f61b9"} Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.550520 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgcxm" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.550932 4865 scope.go:117] "RemoveContainer" containerID="71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.594052 4865 scope.go:117] "RemoveContainer" containerID="9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.616951 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgcxm"] Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.627622 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgcxm"] Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.648581 4865 scope.go:117] "RemoveContainer" containerID="027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.703665 4865 scope.go:117] "RemoveContainer" containerID="71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647" Jan 03 05:13:35 crc kubenswrapper[4865]: E0103 05:13:35.704056 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647\": container with ID starting with 71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647 not found: ID does not exist" containerID="71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.704086 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647"} err="failed to get container status \"71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647\": rpc error: code = NotFound desc = could not find container 
\"71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647\": container with ID starting with 71b36b62574c3fd688ca0c56907293e6c251664576e6f5b5625ab98fea566647 not found: ID does not exist" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.704108 4865 scope.go:117] "RemoveContainer" containerID="9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6" Jan 03 05:13:35 crc kubenswrapper[4865]: E0103 05:13:35.704390 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6\": container with ID starting with 9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6 not found: ID does not exist" containerID="9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.704409 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6"} err="failed to get container status \"9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6\": rpc error: code = NotFound desc = could not find container \"9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6\": container with ID starting with 9164b2d7b22ae298390013d49708957a1b45736cf54ef89fd9fedd0ab95753d6 not found: ID does not exist" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.704422 4865 scope.go:117] "RemoveContainer" containerID="027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b" Jan 03 05:13:35 crc kubenswrapper[4865]: E0103 05:13:35.704801 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b\": container with ID starting with 027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b not found: ID does not exist" 
containerID="027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b" Jan 03 05:13:35 crc kubenswrapper[4865]: I0103 05:13:35.704826 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b"} err="failed to get container status \"027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b\": rpc error: code = NotFound desc = could not find container \"027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b\": container with ID starting with 027817f390ff9a75cccc27ab98aa9401658d84f433450b988d126448e95cba0b not found: ID does not exist" Jan 03 05:13:37 crc kubenswrapper[4865]: I0103 05:13:37.173439 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" path="/var/lib/kubelet/pods/bdc1f45f-95cb-4688-a2f1-2320cf640f43/volumes" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.635353 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-95689"] Jan 03 05:14:31 crc kubenswrapper[4865]: E0103 05:14:31.637109 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="extract-content" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.637123 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="extract-content" Jan 03 05:14:31 crc kubenswrapper[4865]: E0103 05:14:31.637132 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="registry-server" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.637138 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="registry-server" Jan 03 05:14:31 crc kubenswrapper[4865]: E0103 05:14:31.637154 4865 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="extract-utilities" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.637161 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="extract-utilities" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.637333 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc1f45f-95cb-4688-a2f1-2320cf640f43" containerName="registry-server" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.638541 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.657055 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95689"] Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.744877 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-catalog-content\") pod \"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.745020 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbv6l\" (UniqueName: \"kubernetes.io/projected/46799387-cbbc-41cf-81d0-4ba17b1d6076-kube-api-access-dbv6l\") pod \"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.745085 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-utilities\") pod 
\"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.847142 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbv6l\" (UniqueName: \"kubernetes.io/projected/46799387-cbbc-41cf-81d0-4ba17b1d6076-kube-api-access-dbv6l\") pod \"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.847213 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-utilities\") pod \"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.847288 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-catalog-content\") pod \"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.847721 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-catalog-content\") pod \"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.848029 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-utilities\") pod \"community-operators-95689\" (UID: 
\"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.882252 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbv6l\" (UniqueName: \"kubernetes.io/projected/46799387-cbbc-41cf-81d0-4ba17b1d6076-kube-api-access-dbv6l\") pod \"community-operators-95689\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:31 crc kubenswrapper[4865]: I0103 05:14:31.978142 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:32 crc kubenswrapper[4865]: I0103 05:14:32.544355 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95689"] Jan 03 05:14:33 crc kubenswrapper[4865]: I0103 05:14:33.143008 4865 generic.go:334] "Generic (PLEG): container finished" podID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerID="2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d" exitCode=0 Jan 03 05:14:33 crc kubenswrapper[4865]: I0103 05:14:33.143076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95689" event={"ID":"46799387-cbbc-41cf-81d0-4ba17b1d6076","Type":"ContainerDied","Data":"2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d"} Jan 03 05:14:33 crc kubenswrapper[4865]: I0103 05:14:33.143298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95689" event={"ID":"46799387-cbbc-41cf-81d0-4ba17b1d6076","Type":"ContainerStarted","Data":"7583530ef2d19d5b11f28c8e7dfc95c05a053ac301dce1bf26ef4aecfeed756c"} Jan 03 05:14:33 crc kubenswrapper[4865]: I0103 05:14:33.148244 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 05:14:34 crc kubenswrapper[4865]: I0103 05:14:34.151220 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95689" event={"ID":"46799387-cbbc-41cf-81d0-4ba17b1d6076","Type":"ContainerStarted","Data":"73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2"} Jan 03 05:14:35 crc kubenswrapper[4865]: I0103 05:14:35.163902 4865 generic.go:334] "Generic (PLEG): container finished" podID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerID="73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2" exitCode=0 Jan 03 05:14:35 crc kubenswrapper[4865]: I0103 05:14:35.173247 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95689" event={"ID":"46799387-cbbc-41cf-81d0-4ba17b1d6076","Type":"ContainerDied","Data":"73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2"} Jan 03 05:14:36 crc kubenswrapper[4865]: I0103 05:14:36.172121 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95689" event={"ID":"46799387-cbbc-41cf-81d0-4ba17b1d6076","Type":"ContainerStarted","Data":"ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca"} Jan 03 05:14:36 crc kubenswrapper[4865]: I0103 05:14:36.191742 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-95689" podStartSLOduration=2.724282124 podStartE2EDuration="5.191722463s" podCreationTimestamp="2026-01-03 05:14:31 +0000 UTC" firstStartedPulling="2026-01-03 05:14:33.147860753 +0000 UTC m=+3500.264913978" lastFinishedPulling="2026-01-03 05:14:35.615301132 +0000 UTC m=+3502.732354317" observedRunningTime="2026-01-03 05:14:36.189627027 +0000 UTC m=+3503.306680212" watchObservedRunningTime="2026-01-03 05:14:36.191722463 +0000 UTC m=+3503.308775638" Jan 03 05:14:41 crc kubenswrapper[4865]: I0103 05:14:41.979442 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:41 crc 
kubenswrapper[4865]: I0103 05:14:41.980081 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:42 crc kubenswrapper[4865]: I0103 05:14:42.062827 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:42 crc kubenswrapper[4865]: I0103 05:14:42.291838 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:42 crc kubenswrapper[4865]: I0103 05:14:42.335725 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95689"] Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.252085 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-95689" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="registry-server" containerID="cri-o://ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca" gracePeriod=2 Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.756320 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.915016 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-utilities\") pod \"46799387-cbbc-41cf-81d0-4ba17b1d6076\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.915169 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-catalog-content\") pod \"46799387-cbbc-41cf-81d0-4ba17b1d6076\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.915196 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbv6l\" (UniqueName: \"kubernetes.io/projected/46799387-cbbc-41cf-81d0-4ba17b1d6076-kube-api-access-dbv6l\") pod \"46799387-cbbc-41cf-81d0-4ba17b1d6076\" (UID: \"46799387-cbbc-41cf-81d0-4ba17b1d6076\") " Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.916265 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-utilities" (OuterVolumeSpecName: "utilities") pod "46799387-cbbc-41cf-81d0-4ba17b1d6076" (UID: "46799387-cbbc-41cf-81d0-4ba17b1d6076"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.922317 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46799387-cbbc-41cf-81d0-4ba17b1d6076-kube-api-access-dbv6l" (OuterVolumeSpecName: "kube-api-access-dbv6l") pod "46799387-cbbc-41cf-81d0-4ba17b1d6076" (UID: "46799387-cbbc-41cf-81d0-4ba17b1d6076"). InnerVolumeSpecName "kube-api-access-dbv6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:14:44 crc kubenswrapper[4865]: I0103 05:14:44.985626 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46799387-cbbc-41cf-81d0-4ba17b1d6076" (UID: "46799387-cbbc-41cf-81d0-4ba17b1d6076"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.017537 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.017791 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46799387-cbbc-41cf-81d0-4ba17b1d6076-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.017870 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbv6l\" (UniqueName: \"kubernetes.io/projected/46799387-cbbc-41cf-81d0-4ba17b1d6076-kube-api-access-dbv6l\") on node \"crc\" DevicePath \"\"" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.263640 4865 generic.go:334] "Generic (PLEG): container finished" podID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerID="ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca" exitCode=0 Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.263679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95689" event={"ID":"46799387-cbbc-41cf-81d0-4ba17b1d6076","Type":"ContainerDied","Data":"ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca"} Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.263709 4865 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-95689" event={"ID":"46799387-cbbc-41cf-81d0-4ba17b1d6076","Type":"ContainerDied","Data":"7583530ef2d19d5b11f28c8e7dfc95c05a053ac301dce1bf26ef4aecfeed756c"} Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.263710 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95689" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.263725 4865 scope.go:117] "RemoveContainer" containerID="ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.288115 4865 scope.go:117] "RemoveContainer" containerID="73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.288808 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95689"] Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.296603 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-95689"] Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.316503 4865 scope.go:117] "RemoveContainer" containerID="2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.355692 4865 scope.go:117] "RemoveContainer" containerID="ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca" Jan 03 05:14:45 crc kubenswrapper[4865]: E0103 05:14:45.356223 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca\": container with ID starting with ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca not found: ID does not exist" containerID="ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 
05:14:45.356301 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca"} err="failed to get container status \"ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca\": rpc error: code = NotFound desc = could not find container \"ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca\": container with ID starting with ce4faa8c50767016b0fa15dc63dbfb8e1e08c2b20f5f8089cb720829685a5fca not found: ID does not exist" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.356362 4865 scope.go:117] "RemoveContainer" containerID="73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2" Jan 03 05:14:45 crc kubenswrapper[4865]: E0103 05:14:45.356864 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2\": container with ID starting with 73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2 not found: ID does not exist" containerID="73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.356908 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2"} err="failed to get container status \"73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2\": rpc error: code = NotFound desc = could not find container \"73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2\": container with ID starting with 73068a456c5b7fbee8501d46471b3ceadf4f50878012722d4acf9e8d820beae2 not found: ID does not exist" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.356959 4865 scope.go:117] "RemoveContainer" containerID="2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d" Jan 03 05:14:45 crc 
kubenswrapper[4865]: E0103 05:14:45.357509 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d\": container with ID starting with 2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d not found: ID does not exist" containerID="2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d" Jan 03 05:14:45 crc kubenswrapper[4865]: I0103 05:14:45.357593 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d"} err="failed to get container status \"2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d\": rpc error: code = NotFound desc = could not find container \"2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d\": container with ID starting with 2bf7af8435ecec98a6e3f87c8809aab62583d9240dbf6e761166d1319a00628d not found: ID does not exist" Jan 03 05:14:47 crc kubenswrapper[4865]: I0103 05:14:47.172998 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" path="/var/lib/kubelet/pods/46799387-cbbc-41cf-81d0-4ba17b1d6076/volumes" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.151492 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v"] Jan 03 05:15:00 crc kubenswrapper[4865]: E0103 05:15:00.152639 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="extract-utilities" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.152662 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="extract-utilities" Jan 03 05:15:00 crc kubenswrapper[4865]: E0103 05:15:00.152699 4865 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="extract-content" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.152714 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="extract-content" Jan 03 05:15:00 crc kubenswrapper[4865]: E0103 05:15:00.152745 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="registry-server" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.152756 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="registry-server" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.153149 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="46799387-cbbc-41cf-81d0-4ba17b1d6076" containerName="registry-server" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.154248 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.160062 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.160255 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.166109 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v"] Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.235214 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd50fe9-21b6-46ec-b832-0071be70bfd4-config-volume\") pod 
\"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.235594 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbd50fe9-21b6-46ec-b832-0071be70bfd4-secret-volume\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.235749 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58v7h\" (UniqueName: \"kubernetes.io/projected/bbd50fe9-21b6-46ec-b832-0071be70bfd4-kube-api-access-58v7h\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.341138 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58v7h\" (UniqueName: \"kubernetes.io/projected/bbd50fe9-21b6-46ec-b832-0071be70bfd4-kube-api-access-58v7h\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.341453 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd50fe9-21b6-46ec-b832-0071be70bfd4-config-volume\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.341686 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbd50fe9-21b6-46ec-b832-0071be70bfd4-secret-volume\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.342449 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd50fe9-21b6-46ec-b832-0071be70bfd4-config-volume\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.362728 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbd50fe9-21b6-46ec-b832-0071be70bfd4-secret-volume\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.365748 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58v7h\" (UniqueName: \"kubernetes.io/projected/bbd50fe9-21b6-46ec-b832-0071be70bfd4-kube-api-access-58v7h\") pod \"collect-profiles-29456955-wzp7v\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.500930 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:00 crc kubenswrapper[4865]: I0103 05:15:00.986776 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v"] Jan 03 05:15:01 crc kubenswrapper[4865]: I0103 05:15:01.442817 4865 generic.go:334] "Generic (PLEG): container finished" podID="bbd50fe9-21b6-46ec-b832-0071be70bfd4" containerID="b6847747f59d512733d405d2c2be47bb82a07a7d904b1252ea6ee2482b25a157" exitCode=0 Jan 03 05:15:01 crc kubenswrapper[4865]: I0103 05:15:01.442882 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" event={"ID":"bbd50fe9-21b6-46ec-b832-0071be70bfd4","Type":"ContainerDied","Data":"b6847747f59d512733d405d2c2be47bb82a07a7d904b1252ea6ee2482b25a157"} Jan 03 05:15:01 crc kubenswrapper[4865]: I0103 05:15:01.442927 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" event={"ID":"bbd50fe9-21b6-46ec-b832-0071be70bfd4","Type":"ContainerStarted","Data":"71f988d0196ca25927a17f1b8047c9a88a94d4f81bde6a20d3bbc74aad6230fd"} Jan 03 05:15:02 crc kubenswrapper[4865]: I0103 05:15:02.898198 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:02 crc kubenswrapper[4865]: I0103 05:15:02.995601 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58v7h\" (UniqueName: \"kubernetes.io/projected/bbd50fe9-21b6-46ec-b832-0071be70bfd4-kube-api-access-58v7h\") pod \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " Jan 03 05:15:02 crc kubenswrapper[4865]: I0103 05:15:02.995937 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd50fe9-21b6-46ec-b832-0071be70bfd4-config-volume\") pod \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " Jan 03 05:15:02 crc kubenswrapper[4865]: I0103 05:15:02.996087 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbd50fe9-21b6-46ec-b832-0071be70bfd4-secret-volume\") pod \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\" (UID: \"bbd50fe9-21b6-46ec-b832-0071be70bfd4\") " Jan 03 05:15:02 crc kubenswrapper[4865]: I0103 05:15:02.996502 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd50fe9-21b6-46ec-b832-0071be70bfd4-config-volume" (OuterVolumeSpecName: "config-volume") pod "bbd50fe9-21b6-46ec-b832-0071be70bfd4" (UID: "bbd50fe9-21b6-46ec-b832-0071be70bfd4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:15:02 crc kubenswrapper[4865]: I0103 05:15:02.996673 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbd50fe9-21b6-46ec-b832-0071be70bfd4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:03 crc kubenswrapper[4865]: I0103 05:15:03.005662 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd50fe9-21b6-46ec-b832-0071be70bfd4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bbd50fe9-21b6-46ec-b832-0071be70bfd4" (UID: "bbd50fe9-21b6-46ec-b832-0071be70bfd4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:15:03 crc kubenswrapper[4865]: I0103 05:15:03.008625 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd50fe9-21b6-46ec-b832-0071be70bfd4-kube-api-access-58v7h" (OuterVolumeSpecName: "kube-api-access-58v7h") pod "bbd50fe9-21b6-46ec-b832-0071be70bfd4" (UID: "bbd50fe9-21b6-46ec-b832-0071be70bfd4"). InnerVolumeSpecName "kube-api-access-58v7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:15:03 crc kubenswrapper[4865]: I0103 05:15:03.098404 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbd50fe9-21b6-46ec-b832-0071be70bfd4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:03 crc kubenswrapper[4865]: I0103 05:15:03.098436 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58v7h\" (UniqueName: \"kubernetes.io/projected/bbd50fe9-21b6-46ec-b832-0071be70bfd4-kube-api-access-58v7h\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:03 crc kubenswrapper[4865]: I0103 05:15:03.466066 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" event={"ID":"bbd50fe9-21b6-46ec-b832-0071be70bfd4","Type":"ContainerDied","Data":"71f988d0196ca25927a17f1b8047c9a88a94d4f81bde6a20d3bbc74aad6230fd"} Jan 03 05:15:03 crc kubenswrapper[4865]: I0103 05:15:03.466123 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456955-wzp7v" Jan 03 05:15:03 crc kubenswrapper[4865]: I0103 05:15:03.466128 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f988d0196ca25927a17f1b8047c9a88a94d4f81bde6a20d3bbc74aad6230fd" Jan 03 05:15:04 crc kubenswrapper[4865]: I0103 05:15:04.016116 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n"] Jan 03 05:15:04 crc kubenswrapper[4865]: I0103 05:15:04.025264 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456910-lzj6n"] Jan 03 05:15:05 crc kubenswrapper[4865]: I0103 05:15:05.177952 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6921c8-ce94-444d-9597-6f73851c6c95" path="/var/lib/kubelet/pods/8d6921c8-ce94-444d-9597-6f73851c6c95/volumes" Jan 03 05:15:34 crc kubenswrapper[4865]: I0103 05:15:34.797559 4865 generic.go:334] "Generic (PLEG): container finished" podID="d51a1b58-dba9-4c1f-87bb-bce07ad57852" containerID="a264d5b271b8dee798e719022af5b48dfeb881b0b615fc7f7c5b2b1454945e1b" exitCode=0 Jan 03 05:15:34 crc kubenswrapper[4865]: I0103 05:15:34.797633 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d51a1b58-dba9-4c1f-87bb-bce07ad57852","Type":"ContainerDied","Data":"a264d5b271b8dee798e719022af5b48dfeb881b0b615fc7f7c5b2b1454945e1b"} Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.287215 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436527 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ca-certs\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436592 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-config-data\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436658 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-workdir\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436680 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config-secret\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436725 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-temporary\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436785 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436802 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436853 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68sz8\" (UniqueName: \"kubernetes.io/projected/d51a1b58-dba9-4c1f-87bb-bce07ad57852-kube-api-access-68sz8\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.436927 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ssh-key\") pod \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\" (UID: \"d51a1b58-dba9-4c1f-87bb-bce07ad57852\") " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.437442 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-config-data" (OuterVolumeSpecName: "config-data") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.437696 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.441985 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.460649 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.470592 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51a1b58-dba9-4c1f-87bb-bce07ad57852-kube-api-access-68sz8" (OuterVolumeSpecName: "kube-api-access-68sz8") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "kube-api-access-68sz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.497532 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.513135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.519527 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.534840 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d51a1b58-dba9-4c1f-87bb-bce07ad57852" (UID: "d51a1b58-dba9-4c1f-87bb-bce07ad57852"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538779 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538824 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538837 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68sz8\" (UniqueName: \"kubernetes.io/projected/d51a1b58-dba9-4c1f-87bb-bce07ad57852-kube-api-access-68sz8\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538848 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538857 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d51a1b58-dba9-4c1f-87bb-bce07ad57852-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538866 4865 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538874 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538883 4865 
reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d51a1b58-dba9-4c1f-87bb-bce07ad57852-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.538891 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d51a1b58-dba9-4c1f-87bb-bce07ad57852-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.557963 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.641333 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.823965 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d51a1b58-dba9-4c1f-87bb-bce07ad57852","Type":"ContainerDied","Data":"a10d33fdb4a01907a596eda8e6092b449ceee810d5cf00c76bc270a892f4e8fc"} Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.824008 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10d33fdb4a01907a596eda8e6092b449ceee810d5cf00c76bc270a892f4e8fc" Jan 03 05:15:36 crc kubenswrapper[4865]: I0103 05:15:36.824032 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 03 05:15:37 crc kubenswrapper[4865]: I0103 05:15:37.584651 4865 scope.go:117] "RemoveContainer" containerID="d44ab7d2fc18e12f3dfa8fc117444e02d3c2a68d0eb4ac4db8f5a440b2078931" Jan 03 05:15:40 crc kubenswrapper[4865]: I0103 05:15:40.739702 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:15:40 crc kubenswrapper[4865]: I0103 05:15:40.740085 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.823304 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 03 05:15:48 crc kubenswrapper[4865]: E0103 05:15:48.825285 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51a1b58-dba9-4c1f-87bb-bce07ad57852" containerName="tempest-tests-tempest-tests-runner" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.825313 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51a1b58-dba9-4c1f-87bb-bce07ad57852" containerName="tempest-tests-tempest-tests-runner" Jan 03 05:15:48 crc kubenswrapper[4865]: E0103 05:15:48.825357 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd50fe9-21b6-46ec-b832-0071be70bfd4" containerName="collect-profiles" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.825370 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd50fe9-21b6-46ec-b832-0071be70bfd4" 
containerName="collect-profiles" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.826226 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51a1b58-dba9-4c1f-87bb-bce07ad57852" containerName="tempest-tests-tempest-tests-runner" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.826263 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd50fe9-21b6-46ec-b832-0071be70bfd4" containerName="collect-profiles" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.827744 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.842662 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-smm7z" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.862522 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.893675 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"627c2ef6-ef77-4f3a-b7e9-a56643ffece7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.893868 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bswc\" (UniqueName: \"kubernetes.io/projected/627c2ef6-ef77-4f3a-b7e9-a56643ffece7-kube-api-access-8bswc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"627c2ef6-ef77-4f3a-b7e9-a56643ffece7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.995661 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"627c2ef6-ef77-4f3a-b7e9-a56643ffece7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.995809 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bswc\" (UniqueName: \"kubernetes.io/projected/627c2ef6-ef77-4f3a-b7e9-a56643ffece7-kube-api-access-8bswc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"627c2ef6-ef77-4f3a-b7e9-a56643ffece7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:48 crc kubenswrapper[4865]: I0103 05:15:48.996131 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"627c2ef6-ef77-4f3a-b7e9-a56643ffece7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:49 crc kubenswrapper[4865]: I0103 05:15:49.021821 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bswc\" (UniqueName: \"kubernetes.io/projected/627c2ef6-ef77-4f3a-b7e9-a56643ffece7-kube-api-access-8bswc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"627c2ef6-ef77-4f3a-b7e9-a56643ffece7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:49 crc kubenswrapper[4865]: I0103 05:15:49.027363 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: 
\"627c2ef6-ef77-4f3a-b7e9-a56643ffece7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:49 crc kubenswrapper[4865]: I0103 05:15:49.163050 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 03 05:15:49 crc kubenswrapper[4865]: I0103 05:15:49.661049 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 03 05:15:49 crc kubenswrapper[4865]: I0103 05:15:49.961053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"627c2ef6-ef77-4f3a-b7e9-a56643ffece7","Type":"ContainerStarted","Data":"76ca7b74e05de2314853923208b78e2ed340c64e11061101eb47c12641758762"} Jan 03 05:15:50 crc kubenswrapper[4865]: I0103 05:15:50.975479 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"627c2ef6-ef77-4f3a-b7e9-a56643ffece7","Type":"ContainerStarted","Data":"f7811280318ab85bdd53f93b86cd60bcdd1c60eba1354b8bf650c273adc61866"} Jan 03 05:15:51 crc kubenswrapper[4865]: I0103 05:15:51.008865 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.06388366 podStartE2EDuration="3.008835414s" podCreationTimestamp="2026-01-03 05:15:48 +0000 UTC" firstStartedPulling="2026-01-03 05:15:49.670964866 +0000 UTC m=+3576.788018091" lastFinishedPulling="2026-01-03 05:15:50.61591665 +0000 UTC m=+3577.732969845" observedRunningTime="2026-01-03 05:15:50.989959193 +0000 UTC m=+3578.107012418" watchObservedRunningTime="2026-01-03 05:15:51.008835414 +0000 UTC m=+3578.125888639" Jan 03 05:16:10 crc kubenswrapper[4865]: I0103 05:16:10.740527 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:16:10 crc kubenswrapper[4865]: I0103 05:16:10.741181 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.746466 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8tqzj/must-gather-tbjwh"] Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.748212 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.750572 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8tqzj"/"default-dockercfg-ps7qf" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.750586 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8tqzj"/"openshift-service-ca.crt" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.751226 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8tqzj"/"kube-root-ca.crt" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.767872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8tqzj/must-gather-tbjwh"] Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.819917 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4374183-282a-4e98-82de-5a45f98bf733-must-gather-output\") pod \"must-gather-tbjwh\" (UID: 
\"a4374183-282a-4e98-82de-5a45f98bf733\") " pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.819996 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4c9\" (UniqueName: \"kubernetes.io/projected/a4374183-282a-4e98-82de-5a45f98bf733-kube-api-access-lp4c9\") pod \"must-gather-tbjwh\" (UID: \"a4374183-282a-4e98-82de-5a45f98bf733\") " pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.921472 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4374183-282a-4e98-82de-5a45f98bf733-must-gather-output\") pod \"must-gather-tbjwh\" (UID: \"a4374183-282a-4e98-82de-5a45f98bf733\") " pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.921518 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4c9\" (UniqueName: \"kubernetes.io/projected/a4374183-282a-4e98-82de-5a45f98bf733-kube-api-access-lp4c9\") pod \"must-gather-tbjwh\" (UID: \"a4374183-282a-4e98-82de-5a45f98bf733\") " pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.921911 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4374183-282a-4e98-82de-5a45f98bf733-must-gather-output\") pod \"must-gather-tbjwh\" (UID: \"a4374183-282a-4e98-82de-5a45f98bf733\") " pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:13 crc kubenswrapper[4865]: I0103 05:16:13.938865 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp4c9\" (UniqueName: \"kubernetes.io/projected/a4374183-282a-4e98-82de-5a45f98bf733-kube-api-access-lp4c9\") pod \"must-gather-tbjwh\" (UID: 
\"a4374183-282a-4e98-82de-5a45f98bf733\") " pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:14 crc kubenswrapper[4865]: I0103 05:16:14.064842 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:16:14 crc kubenswrapper[4865]: W0103 05:16:14.442636 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4374183_282a_4e98_82de_5a45f98bf733.slice/crio-5be75684cc9cf305bbdc32a03729c39961b4fe76a0992d2937eb3d32db172cb0 WatchSource:0}: Error finding container 5be75684cc9cf305bbdc32a03729c39961b4fe76a0992d2937eb3d32db172cb0: Status 404 returned error can't find the container with id 5be75684cc9cf305bbdc32a03729c39961b4fe76a0992d2937eb3d32db172cb0 Jan 03 05:16:14 crc kubenswrapper[4865]: I0103 05:16:14.443529 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8tqzj/must-gather-tbjwh"] Jan 03 05:16:15 crc kubenswrapper[4865]: I0103 05:16:15.310915 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" event={"ID":"a4374183-282a-4e98-82de-5a45f98bf733","Type":"ContainerStarted","Data":"5be75684cc9cf305bbdc32a03729c39961b4fe76a0992d2937eb3d32db172cb0"} Jan 03 05:16:21 crc kubenswrapper[4865]: I0103 05:16:21.374084 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" event={"ID":"a4374183-282a-4e98-82de-5a45f98bf733","Type":"ContainerStarted","Data":"16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5"} Jan 03 05:16:21 crc kubenswrapper[4865]: I0103 05:16:21.374545 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" event={"ID":"a4374183-282a-4e98-82de-5a45f98bf733","Type":"ContainerStarted","Data":"303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3"} Jan 03 05:16:21 crc 
kubenswrapper[4865]: I0103 05:16:21.396968 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" podStartSLOduration=2.414197921 podStartE2EDuration="8.3969507s" podCreationTimestamp="2026-01-03 05:16:13 +0000 UTC" firstStartedPulling="2026-01-03 05:16:14.444905558 +0000 UTC m=+3601.561958733" lastFinishedPulling="2026-01-03 05:16:20.427658327 +0000 UTC m=+3607.544711512" observedRunningTime="2026-01-03 05:16:21.389095167 +0000 UTC m=+3608.506148352" watchObservedRunningTime="2026-01-03 05:16:21.3969507 +0000 UTC m=+3608.514003885" Jan 03 05:16:23 crc kubenswrapper[4865]: I0103 05:16:23.979995 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-whghf"] Jan 03 05:16:23 crc kubenswrapper[4865]: I0103 05:16:23.985684 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.094546 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-host\") pod \"crc-debug-whghf\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.094822 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk6wg\" (UniqueName: \"kubernetes.io/projected/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-kube-api-access-mk6wg\") pod \"crc-debug-whghf\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.197363 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk6wg\" (UniqueName: 
\"kubernetes.io/projected/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-kube-api-access-mk6wg\") pod \"crc-debug-whghf\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.197491 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-host\") pod \"crc-debug-whghf\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.197768 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-host\") pod \"crc-debug-whghf\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.219341 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk6wg\" (UniqueName: \"kubernetes.io/projected/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-kube-api-access-mk6wg\") pod \"crc-debug-whghf\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.310991 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:16:24 crc kubenswrapper[4865]: W0103 05:16:24.339243 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3bbb4b9_a601_46f6_b55c_2decb1b9eab4.slice/crio-94330319b24bbc7b22fb427eb0cd088f5e300b59fee946a0cdfc29ef87c741fb WatchSource:0}: Error finding container 94330319b24bbc7b22fb427eb0cd088f5e300b59fee946a0cdfc29ef87c741fb: Status 404 returned error can't find the container with id 94330319b24bbc7b22fb427eb0cd088f5e300b59fee946a0cdfc29ef87c741fb Jan 03 05:16:24 crc kubenswrapper[4865]: I0103 05:16:24.399866 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/crc-debug-whghf" event={"ID":"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4","Type":"ContainerStarted","Data":"94330319b24bbc7b22fb427eb0cd088f5e300b59fee946a0cdfc29ef87c741fb"} Jan 03 05:16:35 crc kubenswrapper[4865]: I0103 05:16:35.492466 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/crc-debug-whghf" event={"ID":"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4","Type":"ContainerStarted","Data":"a1fb73c709b4e0b637452b9b2afe5fa34bda711a52653c2121c9dfc3130583af"} Jan 03 05:16:35 crc kubenswrapper[4865]: I0103 05:16:35.514957 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8tqzj/crc-debug-whghf" podStartSLOduration=1.6964994519999999 podStartE2EDuration="12.514936095s" podCreationTimestamp="2026-01-03 05:16:23 +0000 UTC" firstStartedPulling="2026-01-03 05:16:24.341077102 +0000 UTC m=+3611.458130297" lastFinishedPulling="2026-01-03 05:16:35.159513755 +0000 UTC m=+3622.276566940" observedRunningTime="2026-01-03 05:16:35.504731009 +0000 UTC m=+3622.621784204" watchObservedRunningTime="2026-01-03 05:16:35.514936095 +0000 UTC m=+3622.631989290" Jan 03 05:16:40 crc kubenswrapper[4865]: I0103 05:16:40.740001 4865 patch_prober.go:28] interesting 
pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:16:40 crc kubenswrapper[4865]: I0103 05:16:40.740692 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:16:40 crc kubenswrapper[4865]: I0103 05:16:40.740760 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 05:16:40 crc kubenswrapper[4865]: I0103 05:16:40.741643 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a25e79429347d8e01a1796e4ca821fe5a7c3853f5c499d29a26c8104caca8dcd"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:16:40 crc kubenswrapper[4865]: I0103 05:16:40.741707 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://a25e79429347d8e01a1796e4ca821fe5a7c3853f5c499d29a26c8104caca8dcd" gracePeriod=600 Jan 03 05:16:42 crc kubenswrapper[4865]: I0103 05:16:42.560127 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="a25e79429347d8e01a1796e4ca821fe5a7c3853f5c499d29a26c8104caca8dcd" exitCode=0 Jan 03 05:16:42 crc kubenswrapper[4865]: I0103 05:16:42.560762 
4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"a25e79429347d8e01a1796e4ca821fe5a7c3853f5c499d29a26c8104caca8dcd"} Jan 03 05:16:42 crc kubenswrapper[4865]: I0103 05:16:42.560819 4865 scope.go:117] "RemoveContainer" containerID="1d4294bdbbfc450b5596551b159f6739ff5bec7467e8e40c8a9d062182d3b085" Jan 03 05:16:43 crc kubenswrapper[4865]: I0103 05:16:43.571809 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8"} Jan 03 05:17:11 crc kubenswrapper[4865]: I0103 05:17:11.868617 4865 generic.go:334] "Generic (PLEG): container finished" podID="e3bbb4b9-a601-46f6-b55c-2decb1b9eab4" containerID="a1fb73c709b4e0b637452b9b2afe5fa34bda711a52653c2121c9dfc3130583af" exitCode=0 Jan 03 05:17:11 crc kubenswrapper[4865]: I0103 05:17:11.868766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/crc-debug-whghf" event={"ID":"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4","Type":"ContainerDied","Data":"a1fb73c709b4e0b637452b9b2afe5fa34bda711a52653c2121c9dfc3130583af"} Jan 03 05:17:12 crc kubenswrapper[4865]: I0103 05:17:12.968344 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.006872 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-whghf"] Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.015705 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-whghf"] Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.163914 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk6wg\" (UniqueName: \"kubernetes.io/projected/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-kube-api-access-mk6wg\") pod \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.164010 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-host\") pod \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\" (UID: \"e3bbb4b9-a601-46f6-b55c-2decb1b9eab4\") " Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.164530 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-host" (OuterVolumeSpecName: "host") pod "e3bbb4b9-a601-46f6-b55c-2decb1b9eab4" (UID: "e3bbb4b9-a601-46f6-b55c-2decb1b9eab4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.171320 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-kube-api-access-mk6wg" (OuterVolumeSpecName: "kube-api-access-mk6wg") pod "e3bbb4b9-a601-46f6-b55c-2decb1b9eab4" (UID: "e3bbb4b9-a601-46f6-b55c-2decb1b9eab4"). InnerVolumeSpecName "kube-api-access-mk6wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.266186 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk6wg\" (UniqueName: \"kubernetes.io/projected/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-kube-api-access-mk6wg\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.266237 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4-host\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.891031 4865 scope.go:117] "RemoveContainer" containerID="a1fb73c709b4e0b637452b9b2afe5fa34bda711a52653c2121c9dfc3130583af" Jan 03 05:17:13 crc kubenswrapper[4865]: I0103 05:17:13.891093 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-whghf" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.218418 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-pnt47"] Jan 03 05:17:14 crc kubenswrapper[4865]: E0103 05:17:14.218900 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bbb4b9-a601-46f6-b55c-2decb1b9eab4" containerName="container-00" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.218919 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bbb4b9-a601-46f6-b55c-2decb1b9eab4" containerName="container-00" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.219245 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bbb4b9-a601-46f6-b55c-2decb1b9eab4" containerName="container-00" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.220028 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.390560 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xm85\" (UniqueName: \"kubernetes.io/projected/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-kube-api-access-5xm85\") pod \"crc-debug-pnt47\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.390657 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-host\") pod \"crc-debug-pnt47\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.492867 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xm85\" (UniqueName: \"kubernetes.io/projected/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-kube-api-access-5xm85\") pod \"crc-debug-pnt47\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.492950 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-host\") pod \"crc-debug-pnt47\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.493077 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-host\") pod \"crc-debug-pnt47\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc 
kubenswrapper[4865]: I0103 05:17:14.514282 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xm85\" (UniqueName: \"kubernetes.io/projected/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-kube-api-access-5xm85\") pod \"crc-debug-pnt47\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.539673 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:14 crc kubenswrapper[4865]: W0103 05:17:14.588291 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c2d780d_b956_46a2_b4c5_25d9a72ab88b.slice/crio-919427f03a884ae7c09810a179ab3e245935ad9917c8d58923ade968e7698003 WatchSource:0}: Error finding container 919427f03a884ae7c09810a179ab3e245935ad9917c8d58923ade968e7698003: Status 404 returned error can't find the container with id 919427f03a884ae7c09810a179ab3e245935ad9917c8d58923ade968e7698003 Jan 03 05:17:14 crc kubenswrapper[4865]: I0103 05:17:14.901300 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/crc-debug-pnt47" event={"ID":"1c2d780d-b956-46a2-b4c5-25d9a72ab88b","Type":"ContainerStarted","Data":"919427f03a884ae7c09810a179ab3e245935ad9917c8d58923ade968e7698003"} Jan 03 05:17:15 crc kubenswrapper[4865]: I0103 05:17:15.169470 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3bbb4b9-a601-46f6-b55c-2decb1b9eab4" path="/var/lib/kubelet/pods/e3bbb4b9-a601-46f6-b55c-2decb1b9eab4/volumes" Jan 03 05:17:15 crc kubenswrapper[4865]: E0103 05:17:15.194833 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c2d780d_b956_46a2_b4c5_25d9a72ab88b.slice/crio-3c1831537cf98a086455f1778f9e36fd266ec3139f70444a7102550cbf3fb169.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c2d780d_b956_46a2_b4c5_25d9a72ab88b.slice/crio-conmon-3c1831537cf98a086455f1778f9e36fd266ec3139f70444a7102550cbf3fb169.scope\": RecentStats: unable to find data in memory cache]" Jan 03 05:17:15 crc kubenswrapper[4865]: I0103 05:17:15.921226 4865 generic.go:334] "Generic (PLEG): container finished" podID="1c2d780d-b956-46a2-b4c5-25d9a72ab88b" containerID="3c1831537cf98a086455f1778f9e36fd266ec3139f70444a7102550cbf3fb169" exitCode=0 Jan 03 05:17:15 crc kubenswrapper[4865]: I0103 05:17:15.921494 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/crc-debug-pnt47" event={"ID":"1c2d780d-b956-46a2-b4c5-25d9a72ab88b","Type":"ContainerDied","Data":"3c1831537cf98a086455f1778f9e36fd266ec3139f70444a7102550cbf3fb169"} Jan 03 05:17:16 crc kubenswrapper[4865]: I0103 05:17:16.539127 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-pnt47"] Jan 03 05:17:16 crc kubenswrapper[4865]: I0103 05:17:16.545935 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-pnt47"] Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.058246 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.246853 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xm85\" (UniqueName: \"kubernetes.io/projected/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-kube-api-access-5xm85\") pod \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.247070 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-host\") pod \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\" (UID: \"1c2d780d-b956-46a2-b4c5-25d9a72ab88b\") " Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.247202 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-host" (OuterVolumeSpecName: "host") pod "1c2d780d-b956-46a2-b4c5-25d9a72ab88b" (UID: "1c2d780d-b956-46a2-b4c5-25d9a72ab88b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.248847 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-host\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.264628 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-kube-api-access-5xm85" (OuterVolumeSpecName: "kube-api-access-5xm85") pod "1c2d780d-b956-46a2-b4c5-25d9a72ab88b" (UID: "1c2d780d-b956-46a2-b4c5-25d9a72ab88b"). InnerVolumeSpecName "kube-api-access-5xm85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.352561 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xm85\" (UniqueName: \"kubernetes.io/projected/1c2d780d-b956-46a2-b4c5-25d9a72ab88b-kube-api-access-5xm85\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.769793 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-wkd4c"] Jan 03 05:17:17 crc kubenswrapper[4865]: E0103 05:17:17.770256 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2d780d-b956-46a2-b4c5-25d9a72ab88b" containerName="container-00" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.770280 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2d780d-b956-46a2-b4c5-25d9a72ab88b" containerName="container-00" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.770573 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2d780d-b956-46a2-b4c5-25d9a72ab88b" containerName="container-00" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.771339 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.945621 4865 scope.go:117] "RemoveContainer" containerID="3c1831537cf98a086455f1778f9e36fd266ec3139f70444a7102550cbf3fb169" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.945697 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-pnt47" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.965812 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttflz\" (UniqueName: \"kubernetes.io/projected/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-kube-api-access-ttflz\") pod \"crc-debug-wkd4c\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:17 crc kubenswrapper[4865]: I0103 05:17:17.966261 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-host\") pod \"crc-debug-wkd4c\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:18 crc kubenswrapper[4865]: I0103 05:17:18.067980 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttflz\" (UniqueName: \"kubernetes.io/projected/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-kube-api-access-ttflz\") pod \"crc-debug-wkd4c\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:18 crc kubenswrapper[4865]: I0103 05:17:18.068158 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-host\") pod \"crc-debug-wkd4c\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:18 crc kubenswrapper[4865]: I0103 05:17:18.068329 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-host\") pod \"crc-debug-wkd4c\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:18 crc 
kubenswrapper[4865]: I0103 05:17:18.088516 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttflz\" (UniqueName: \"kubernetes.io/projected/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-kube-api-access-ttflz\") pod \"crc-debug-wkd4c\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:18 crc kubenswrapper[4865]: I0103 05:17:18.091464 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:18 crc kubenswrapper[4865]: I0103 05:17:18.974504 4865 generic.go:334] "Generic (PLEG): container finished" podID="3a3c7d34-fefb-40e8-a82c-b9b965f1958e" containerID="67b544ed5d7a69c94ab0bc5bafa11d206dfbb0e3e16249dd646bce37a1bb721f" exitCode=0 Jan 03 05:17:18 crc kubenswrapper[4865]: I0103 05:17:18.974613 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" event={"ID":"3a3c7d34-fefb-40e8-a82c-b9b965f1958e","Type":"ContainerDied","Data":"67b544ed5d7a69c94ab0bc5bafa11d206dfbb0e3e16249dd646bce37a1bb721f"} Jan 03 05:17:18 crc kubenswrapper[4865]: I0103 05:17:18.974872 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" event={"ID":"3a3c7d34-fefb-40e8-a82c-b9b965f1958e","Type":"ContainerStarted","Data":"55c8445da48dc9ddf9357150117e5c883b4705047eeaeebbafed68d041a99f07"} Jan 03 05:17:19 crc kubenswrapper[4865]: I0103 05:17:19.019115 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-wkd4c"] Jan 03 05:17:19 crc kubenswrapper[4865]: I0103 05:17:19.029772 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8tqzj/crc-debug-wkd4c"] Jan 03 05:17:19 crc kubenswrapper[4865]: I0103 05:17:19.173607 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2d780d-b956-46a2-b4c5-25d9a72ab88b" 
path="/var/lib/kubelet/pods/1c2d780d-b956-46a2-b4c5-25d9a72ab88b/volumes" Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.103140 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.205467 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-host\") pod \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.205600 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-host" (OuterVolumeSpecName: "host") pod "3a3c7d34-fefb-40e8-a82c-b9b965f1958e" (UID: "3a3c7d34-fefb-40e8-a82c-b9b965f1958e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.205725 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttflz\" (UniqueName: \"kubernetes.io/projected/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-kube-api-access-ttflz\") pod \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\" (UID: \"3a3c7d34-fefb-40e8-a82c-b9b965f1958e\") " Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.206191 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-host\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.215577 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-kube-api-access-ttflz" (OuterVolumeSpecName: "kube-api-access-ttflz") pod "3a3c7d34-fefb-40e8-a82c-b9b965f1958e" (UID: "3a3c7d34-fefb-40e8-a82c-b9b965f1958e"). 
InnerVolumeSpecName "kube-api-access-ttflz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.308580 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttflz\" (UniqueName: \"kubernetes.io/projected/3a3c7d34-fefb-40e8-a82c-b9b965f1958e-kube-api-access-ttflz\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.992644 4865 scope.go:117] "RemoveContainer" containerID="67b544ed5d7a69c94ab0bc5bafa11d206dfbb0e3e16249dd646bce37a1bb721f" Jan 03 05:17:20 crc kubenswrapper[4865]: I0103 05:17:20.992755 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/crc-debug-wkd4c" Jan 03 05:17:21 crc kubenswrapper[4865]: I0103 05:17:21.168673 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3c7d34-fefb-40e8-a82c-b9b965f1958e" path="/var/lib/kubelet/pods/3a3c7d34-fefb-40e8-a82c-b9b965f1958e/volumes" Jan 03 05:17:33 crc kubenswrapper[4865]: I0103 05:17:33.820785 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5974c4f958-8d8f6_9890e74a-cb62-411d-8cf0-ce88ffcc73e0/barbican-api/0.log" Jan 03 05:17:33 crc kubenswrapper[4865]: I0103 05:17:33.994574 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5974c4f958-8d8f6_9890e74a-cb62-411d-8cf0-ce88ffcc73e0/barbican-api-log/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.038887 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-588ffb9974-8pr57_e6d49d7a-9faf-486d-a98d-4067f581c56c/barbican-keystone-listener/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.133532 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-588ffb9974-8pr57_e6d49d7a-9faf-486d-a98d-4067f581c56c/barbican-keystone-listener-log/0.log" Jan 03 05:17:34 crc 
kubenswrapper[4865]: I0103 05:17:34.199551 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b6fdb99ff-s5qqm_d53a478c-ba6a-4210-b219-66540ed365c6/barbican-worker/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.233996 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b6fdb99ff-s5qqm_d53a478c-ba6a-4210-b219-66540ed365c6/barbican-worker-log/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.369864 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v_32245d9a-04a2-4ee3-99ae-6c876313c5a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.470820 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/ceilometer-central-agent/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.551397 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/proxy-httpd/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.594832 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/ceilometer-notification-agent/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.627060 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/sg-core/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.778151 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_74a06fab-e04b-4eca-b4b1-a9d69b526c1d/cinder-api-log/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.780755 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_74a06fab-e04b-4eca-b4b1-a9d69b526c1d/cinder-api/0.log" Jan 03 
05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.888008 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0bfb3310-1647-4ce9-887c-ccff650d42c5/cinder-scheduler/0.log" Jan 03 05:17:34 crc kubenswrapper[4865]: I0103 05:17:34.960184 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0bfb3310-1647-4ce9-887c-ccff650d42c5/probe/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.040671 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk_1811dd2a-9abd-466c-8c53-992d887c9321/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.129202 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4_65c55653-7592-4a07-bfc2-c6273437c99c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.220097 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-kxf4v_5a6c30d9-afcf-463b-a58f-dfc353d40686/init/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.391516 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-kxf4v_5a6c30d9-afcf-463b-a58f-dfc353d40686/init/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.441231 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-kxf4v_5a6c30d9-afcf-463b-a58f-dfc353d40686/dnsmasq-dns/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.463728 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz_012b0a73-ea86-4b62-aad3-f6b4f63a32bc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.656777 
4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_879b9de0-9d7c-46dc-b9b1-80c16cbebaa0/glance-log/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.667345 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_879b9de0-9d7c-46dc-b9b1-80c16cbebaa0/glance-httpd/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.816928 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_75b65689-ffa6-4b7c-b6c2-2f8e48f4a333/glance-log/0.log" Jan 03 05:17:35 crc kubenswrapper[4865]: I0103 05:17:35.838834 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_75b65689-ffa6-4b7c-b6c2-2f8e48f4a333/glance-httpd/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.038126 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c8ff89456-njqfs_565fcd3f-e73a-446a-b862-717cfb106bd1/horizon/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.156340 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x_c777a6c5-214d-40e9-b948-0e8d7a872578/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.315015 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6wxhc_bee03f5c-0eca-42a3-9d5d-ea38f06a775b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.392803 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c8ff89456-njqfs_565fcd3f-e73a-446a-b862-717cfb106bd1/horizon-log/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.578082 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29456941-p58zp_74c76b1f-c632-4f93-add1-5d8150f79004/keystone-cron/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.669463 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b8664f56d-q48t7_d707525c-50ad-4b99-b59c-177bcae86c4c/keystone-api/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.814644 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5668e757-efa2-4bac-a269-6e2cdd9dbfef/kube-state-metrics/0.log" Jan 03 05:17:36 crc kubenswrapper[4865]: I0103 05:17:36.854745 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl_11b63132-1f33-4f08-9ddd-b705cc52d950/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:37 crc kubenswrapper[4865]: I0103 05:17:37.196063 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b946bd96f-ph9x6_a52305ce-1bb8-4ff4-9d6b-0cf652186e17/neutron-api/0.log" Jan 03 05:17:37 crc kubenswrapper[4865]: I0103 05:17:37.245539 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b946bd96f-ph9x6_a52305ce-1bb8-4ff4-9d6b-0cf652186e17/neutron-httpd/0.log" Jan 03 05:17:37 crc kubenswrapper[4865]: I0103 05:17:37.397636 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd_649b14b9-86dc-4aa5-9086-8a90038e573f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:37 crc kubenswrapper[4865]: I0103 05:17:37.805344 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_673e4191-53ee-4b5d-8bbe-289693bab15d/nova-api-log/0.log" Jan 03 05:17:37 crc kubenswrapper[4865]: I0103 05:17:37.849851 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_a5fca133-2a81-47d7-8f20-62c55d10c3e6/nova-cell0-conductor-conductor/0.log" Jan 03 05:17:37 crc kubenswrapper[4865]: I0103 05:17:37.985995 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_673e4191-53ee-4b5d-8bbe-289693bab15d/nova-api-api/0.log" Jan 03 05:17:38 crc kubenswrapper[4865]: I0103 05:17:38.122718 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0/nova-cell1-conductor-conductor/0.log" Jan 03 05:17:38 crc kubenswrapper[4865]: I0103 05:17:38.157232 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_02276ef3-b599-4ad0-be9e-690430084e13/nova-cell1-novncproxy-novncproxy/0.log" Jan 03 05:17:38 crc kubenswrapper[4865]: I0103 05:17:38.482205 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-7kv5x_0d3ac9c6-cfbf-4614-abb7-9a4338b90aab/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:38 crc kubenswrapper[4865]: I0103 05:17:38.570182 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_db9bcaee-003e-4fb7-b1b6-477c6583c4cc/nova-metadata-log/0.log" Jan 03 05:17:38 crc kubenswrapper[4865]: I0103 05:17:38.925653 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_20843cac-0ba6-4f8f-b767-dd61fdb4f160/mysql-bootstrap/0.log" Jan 03 05:17:38 crc kubenswrapper[4865]: I0103 05:17:38.986435 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_874d8744-eb6f-46f5-a6b7-35348b4f9359/nova-scheduler-scheduler/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.090780 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_20843cac-0ba6-4f8f-b767-dd61fdb4f160/mysql-bootstrap/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: 
I0103 05:17:39.129804 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_20843cac-0ba6-4f8f-b767-dd61fdb4f160/galera/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.277158 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2eba40ea-25a4-4887-aa6c-7feb32b91491/mysql-bootstrap/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.530767 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2eba40ea-25a4-4887-aa6c-7feb32b91491/galera/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.546294 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2eba40ea-25a4-4887-aa6c-7feb32b91491/mysql-bootstrap/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.730891 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a3ac055f-a850-4676-8bc2-0cd50509ff30/openstackclient/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.810556 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gpwwp_334ea42d-9265-43f9-8c4c-fdf516746069/ovn-controller/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.883880 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_db9bcaee-003e-4fb7-b1b6-477c6583c4cc/nova-metadata-metadata/0.log" Jan 03 05:17:39 crc kubenswrapper[4865]: I0103 05:17:39.987296 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qv9sm_c7da9b69-55d5-43a2-8e3c-2a25ca513ce6/openstack-network-exporter/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.076278 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovsdb-server-init/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.243522 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovsdb-server-init/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.277685 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovsdb-server/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.336963 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovs-vswitchd/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.465919 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hc56k_306b5e02-b107-4d9b-9d6e-66c1d4a5ed11/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.524771 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5bec493c-4a1f-49db-b9f3-d05bffd3541b/openstack-network-exporter/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.614919 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5bec493c-4a1f-49db-b9f3-d05bffd3541b/ovn-northd/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.663800 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sxffl"] Jan 03 05:17:40 crc kubenswrapper[4865]: E0103 05:17:40.664194 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c7d34-fefb-40e8-a82c-b9b965f1958e" containerName="container-00" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.664204 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c7d34-fefb-40e8-a82c-b9b965f1958e" containerName="container-00" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.664369 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3a3c7d34-fefb-40e8-a82c-b9b965f1958e" containerName="container-00" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.665675 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.688978 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sxffl"] Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.761146 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4b281b80-3b3a-4c04-a904-669d66ec4a74/ovsdbserver-nb/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.779516 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4b281b80-3b3a-4c04-a904-669d66ec4a74/openstack-network-exporter/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.800743 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-utilities\") pod \"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.803205 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-catalog-content\") pod \"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.803255 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpk7t\" (UniqueName: \"kubernetes.io/projected/d136b9a3-e0b2-4821-af8d-b59678d776cf-kube-api-access-cpk7t\") pod 
\"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.904676 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-utilities\") pod \"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.904723 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-catalog-content\") pod \"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.904749 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpk7t\" (UniqueName: \"kubernetes.io/projected/d136b9a3-e0b2-4821-af8d-b59678d776cf-kube-api-access-cpk7t\") pod \"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.905153 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-utilities\") pod \"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.905210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-catalog-content\") pod \"certified-operators-sxffl\" (UID: 
\"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.912759 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9d25819-b14f-411e-a158-b9f315cf13d6/openstack-network-exporter/0.log" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.926070 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpk7t\" (UniqueName: \"kubernetes.io/projected/d136b9a3-e0b2-4821-af8d-b59678d776cf-kube-api-access-cpk7t\") pod \"certified-operators-sxffl\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:40 crc kubenswrapper[4865]: I0103 05:17:40.998533 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:41 crc kubenswrapper[4865]: I0103 05:17:41.001131 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9d25819-b14f-411e-a158-b9f315cf13d6/ovsdbserver-sb/0.log" Jan 03 05:17:41 crc kubenswrapper[4865]: I0103 05:17:41.298778 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sxffl"] Jan 03 05:17:41 crc kubenswrapper[4865]: I0103 05:17:41.304546 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-677444457b-ftr4x_f1045fbc-a935-4634-a207-aa8b027c9768/placement-api/0.log" Jan 03 05:17:41 crc kubenswrapper[4865]: I0103 05:17:41.461928 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-677444457b-ftr4x_f1045fbc-a935-4634-a207-aa8b027c9768/placement-log/0.log" Jan 03 05:17:41 crc kubenswrapper[4865]: I0103 05:17:41.579673 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5/setup-container/0.log" Jan 03 05:17:41 crc 
kubenswrapper[4865]: I0103 05:17:41.797646 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5/setup-container/0.log" Jan 03 05:17:41 crc kubenswrapper[4865]: I0103 05:17:41.832756 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5/rabbitmq/0.log" Jan 03 05:17:41 crc kubenswrapper[4865]: I0103 05:17:41.850304 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3b26f2aa-ddac-4d96-b129-4738eee8fdb8/setup-container/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.035694 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3b26f2aa-ddac-4d96-b129-4738eee8fdb8/setup-container/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.067446 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3b26f2aa-ddac-4d96-b129-4738eee8fdb8/rabbitmq/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.150164 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p_f4adfc92-6b31-4832-9159-7ec2b85b018f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.188074 4865 generic.go:334] "Generic (PLEG): container finished" podID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerID="b1b58e7549d56c114d811fad155c4fd19e8a134e85df72c7bfec3d8df060334e" exitCode=0 Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.188124 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxffl" event={"ID":"d136b9a3-e0b2-4821-af8d-b59678d776cf","Type":"ContainerDied","Data":"b1b58e7549d56c114d811fad155c4fd19e8a134e85df72c7bfec3d8df060334e"} Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.188159 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxffl" event={"ID":"d136b9a3-e0b2-4821-af8d-b59678d776cf","Type":"ContainerStarted","Data":"679aac37d48d498bdc32cc2a8cf13bf6e82b9a4a9634c17d64d6f5105e7b4834"} Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.312782 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vdnjr_76548eb3-2e5a-4325-85c3-3dac91f58d9b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.345437 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g_5f26a495-d92f-42c6-9395-d4cb6e0037f5/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.592044 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gm6fp_f635f6c7-e6b9-49f1-ba28-59fd66a1c425/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.632923 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-lsx5q_4ae9c175-8601-467f-8f66-220277a0ffe1/ssh-known-hosts-edpm-deployment/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.835574 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5496856655-kc92p_638adfee-76ef-47db-bd03-1dbffb050ac8/proxy-server/0.log" Jan 03 05:17:42 crc kubenswrapper[4865]: I0103 05:17:42.956001 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5496856655-kc92p_638adfee-76ef-47db-bd03-1dbffb050ac8/proxy-httpd/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.010948 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x464g_6baae859-c56d-42e9-a3da-1e883afc3047/swift-ring-rebalance/0.log" Jan 03 
05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.153730 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-auditor/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.204960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxffl" event={"ID":"d136b9a3-e0b2-4821-af8d-b59678d776cf","Type":"ContainerStarted","Data":"8dc941d1f9bad8ca8c7b4b0706936a3dd81b08a1fa0f1d2bd61f23b6a0b489ad"} Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.222978 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-reaper/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.249677 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-replicator/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.326786 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-server/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.351178 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-auditor/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.460813 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-replicator/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.469954 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-server/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.504334 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-updater/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.567920 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-auditor/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.656481 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-expirer/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.680520 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-server/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.728803 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-replicator/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.780726 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-updater/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.866808 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/swift-recon-cron/0.log" Jan 03 05:17:43 crc kubenswrapper[4865]: I0103 05:17:43.883471 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/rsync/0.log" Jan 03 05:17:44 crc kubenswrapper[4865]: I0103 05:17:44.063453 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj_842e7570-e53d-4a45-91cf-d37579440783/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:44 crc kubenswrapper[4865]: I0103 05:17:44.157204 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_d51a1b58-dba9-4c1f-87bb-bce07ad57852/tempest-tests-tempest-tests-runner/0.log" Jan 03 05:17:44 crc kubenswrapper[4865]: I0103 05:17:44.214031 4865 generic.go:334] "Generic (PLEG): container finished" podID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerID="8dc941d1f9bad8ca8c7b4b0706936a3dd81b08a1fa0f1d2bd61f23b6a0b489ad" exitCode=0 Jan 03 05:17:44 crc kubenswrapper[4865]: I0103 05:17:44.214067 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxffl" event={"ID":"d136b9a3-e0b2-4821-af8d-b59678d776cf","Type":"ContainerDied","Data":"8dc941d1f9bad8ca8c7b4b0706936a3dd81b08a1fa0f1d2bd61f23b6a0b489ad"} Jan 03 05:17:44 crc kubenswrapper[4865]: I0103 05:17:44.233855 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_627c2ef6-ef77-4f3a-b7e9-a56643ffece7/test-operator-logs-container/0.log" Jan 03 05:17:44 crc kubenswrapper[4865]: I0103 05:17:44.399767 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w_81ff7929-70dd-400a-ae25-8e7425e5a9ae/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:17:45 crc kubenswrapper[4865]: I0103 05:17:45.241086 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxffl" event={"ID":"d136b9a3-e0b2-4821-af8d-b59678d776cf","Type":"ContainerStarted","Data":"3ef6b0e19ebc6c13fb6f3eb9090bac6fab985db683f74447811f2089aa9fd96f"} Jan 03 05:17:45 crc kubenswrapper[4865]: I0103 05:17:45.261662 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sxffl" podStartSLOduration=2.8197237680000002 podStartE2EDuration="5.261635477s" podCreationTimestamp="2026-01-03 05:17:40 +0000 UTC" firstStartedPulling="2026-01-03 05:17:42.189919083 +0000 UTC m=+3689.306972268" 
lastFinishedPulling="2026-01-03 05:17:44.631830792 +0000 UTC m=+3691.748883977" observedRunningTime="2026-01-03 05:17:45.259978903 +0000 UTC m=+3692.377032088" watchObservedRunningTime="2026-01-03 05:17:45.261635477 +0000 UTC m=+3692.378688662" Jan 03 05:17:50 crc kubenswrapper[4865]: I0103 05:17:50.999173 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:51 crc kubenswrapper[4865]: I0103 05:17:50.999746 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:51 crc kubenswrapper[4865]: I0103 05:17:51.039170 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:51 crc kubenswrapper[4865]: I0103 05:17:51.336102 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:51 crc kubenswrapper[4865]: I0103 05:17:51.382677 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sxffl"] Jan 03 05:17:51 crc kubenswrapper[4865]: I0103 05:17:51.515395 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_31864768-e1b4-438d-b88d-a5a8f9e89e5e/memcached/0.log" Jan 03 05:17:53 crc kubenswrapper[4865]: I0103 05:17:53.310094 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sxffl" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="registry-server" containerID="cri-o://3ef6b0e19ebc6c13fb6f3eb9090bac6fab985db683f74447811f2089aa9fd96f" gracePeriod=2 Jan 03 05:17:54 crc kubenswrapper[4865]: I0103 05:17:54.318831 4865 generic.go:334] "Generic (PLEG): container finished" podID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerID="3ef6b0e19ebc6c13fb6f3eb9090bac6fab985db683f74447811f2089aa9fd96f" 
exitCode=0 Jan 03 05:17:54 crc kubenswrapper[4865]: I0103 05:17:54.318889 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxffl" event={"ID":"d136b9a3-e0b2-4821-af8d-b59678d776cf","Type":"ContainerDied","Data":"3ef6b0e19ebc6c13fb6f3eb9090bac6fab985db683f74447811f2089aa9fd96f"} Jan 03 05:17:54 crc kubenswrapper[4865]: I0103 05:17:54.906985 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.025357 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-catalog-content\") pod \"d136b9a3-e0b2-4821-af8d-b59678d776cf\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.025429 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-utilities\") pod \"d136b9a3-e0b2-4821-af8d-b59678d776cf\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.025558 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpk7t\" (UniqueName: \"kubernetes.io/projected/d136b9a3-e0b2-4821-af8d-b59678d776cf-kube-api-access-cpk7t\") pod \"d136b9a3-e0b2-4821-af8d-b59678d776cf\" (UID: \"d136b9a3-e0b2-4821-af8d-b59678d776cf\") " Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.026300 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-utilities" (OuterVolumeSpecName: "utilities") pod "d136b9a3-e0b2-4821-af8d-b59678d776cf" (UID: "d136b9a3-e0b2-4821-af8d-b59678d776cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.036503 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d136b9a3-e0b2-4821-af8d-b59678d776cf-kube-api-access-cpk7t" (OuterVolumeSpecName: "kube-api-access-cpk7t") pod "d136b9a3-e0b2-4821-af8d-b59678d776cf" (UID: "d136b9a3-e0b2-4821-af8d-b59678d776cf"). InnerVolumeSpecName "kube-api-access-cpk7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.071657 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d136b9a3-e0b2-4821-af8d-b59678d776cf" (UID: "d136b9a3-e0b2-4821-af8d-b59678d776cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.127402 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.127686 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d136b9a3-e0b2-4821-af8d-b59678d776cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.127699 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpk7t\" (UniqueName: \"kubernetes.io/projected/d136b9a3-e0b2-4821-af8d-b59678d776cf-kube-api-access-cpk7t\") on node \"crc\" DevicePath \"\"" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.328929 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxffl" 
event={"ID":"d136b9a3-e0b2-4821-af8d-b59678d776cf","Type":"ContainerDied","Data":"679aac37d48d498bdc32cc2a8cf13bf6e82b9a4a9634c17d64d6f5105e7b4834"} Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.328985 4865 scope.go:117] "RemoveContainer" containerID="3ef6b0e19ebc6c13fb6f3eb9090bac6fab985db683f74447811f2089aa9fd96f" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.328982 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxffl" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.356502 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sxffl"] Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.363342 4865 scope.go:117] "RemoveContainer" containerID="8dc941d1f9bad8ca8c7b4b0706936a3dd81b08a1fa0f1d2bd61f23b6a0b489ad" Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.365042 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sxffl"] Jan 03 05:17:55 crc kubenswrapper[4865]: I0103 05:17:55.393417 4865 scope.go:117] "RemoveContainer" containerID="b1b58e7549d56c114d811fad155c4fd19e8a134e85df72c7bfec3d8df060334e" Jan 03 05:17:57 crc kubenswrapper[4865]: I0103 05:17:57.173673 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" path="/var/lib/kubelet/pods/d136b9a3-e0b2-4821-af8d-b59678d776cf/volumes" Jan 03 05:18:08 crc kubenswrapper[4865]: I0103 05:18:08.862345 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/util/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.022880 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/util/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.028230 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/pull/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.070092 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/pull/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.209416 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/pull/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.235421 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/extract/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.250244 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/util/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.435620 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-8p5g8_675aef60-25dd-4113-a4cf-2f9b91a21150/manager/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.455747 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-pk2cc_e4095c6a-c9c9-42c0-b79e-a4f467563d27/manager/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: 
I0103 05:18:09.631267 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-sxcmb_13c73aaf-30a7-4530-afff-39ec069fccde/manager/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.712719 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-2lcpm_45a74d9c-8e20-4f90-ad8b-8e139ad592fd/manager/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.783544 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-xk5t7_8ed18047-e002-419d-b950-2535d4d778c1/manager/0.log" Jan 03 05:18:09 crc kubenswrapper[4865]: I0103 05:18:09.895485 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-wlwkg_78091396-35cf-4a65-878b-02705fd27e09/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.088243 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-2cfvd_1e7c8270-346b-429d-a775-abb648245a40/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.211311 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-648996cf74-xqj6p_ce7f03b5-6280-4cd8-b3a6-865329b1b9ce/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.289315 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-95md4_31214035-ff7b-4c07-87b8-52a98b09cd52/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.289640 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-d6psw_0adfe2b3-9c76-4213-a856-e834ff2b24e0/manager/0.log" Jan 03 05:18:10 crc 
kubenswrapper[4865]: I0103 05:18:10.457894 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-vp89g_c4804e80-40f4-4f53-abfb-cafc1299f889/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.559100 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-dt8d6_dc3a99b3-36d9-41bc-94f7-74b47980f602/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.719485 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-fzx6g_76d489d2-17da-4af8-8fc5-d8ce6451a45c/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.757698 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-8jc6g_1785ab82-9c1c-41c0-aa07-0285dd49b221/manager/0.log" Jan 03 05:18:10 crc kubenswrapper[4865]: I0103 05:18:10.912810 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq_aae3d614-123b-48a1-81fa-84f2c04b3923/manager/0.log" Jan 03 05:18:11 crc kubenswrapper[4865]: I0103 05:18:11.235265 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-m27bl_d69f6b83-1ca1-48a6-b701-033533fe63d0/registry-server/0.log" Jan 03 05:18:11 crc kubenswrapper[4865]: I0103 05:18:11.300069 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5954d5f7bc-c9qjh_b69ab3cd-b729-4ca3-83c2-989f7660d826/operator/0.log" Jan 03 05:18:11 crc kubenswrapper[4865]: I0103 05:18:11.481614 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-gsncl_a179b7cc-8be0-4956-83ad-7b8b8087103b/manager/0.log" Jan 03 
05:18:11 crc kubenswrapper[4865]: I0103 05:18:11.636651 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-zfntt_500fd3bd-494f-428c-9437-a71add6116d6/manager/0.log" Jan 03 05:18:11 crc kubenswrapper[4865]: I0103 05:18:11.815234 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8zvh8_ebc5bbec-4b11-47c8-a018-ddefda88a53b/operator/0.log" Jan 03 05:18:12 crc kubenswrapper[4865]: I0103 05:18:12.040268 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-9hsk5_ac1ae731-f2d2-436a-b3ef-641ebf79814d/manager/0.log" Jan 03 05:18:12 crc kubenswrapper[4865]: I0103 05:18:12.045171 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54cff86f68-wmvwl_fe531b80-38a8-4a91-95c3-cd9ffe4dee91/manager/0.log" Jan 03 05:18:12 crc kubenswrapper[4865]: I0103 05:18:12.217543 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-kgjdb_56f034ca-a5ed-4b5b-89ca-82ff95662601/manager/0.log" Jan 03 05:18:12 crc kubenswrapper[4865]: I0103 05:18:12.268412 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-fggfw_6af3b14c-24a5-4cf7-8cad-9583e2eb0b40/manager/0.log" Jan 03 05:18:12 crc kubenswrapper[4865]: I0103 05:18:12.709332 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-cshl4_8006ff7b-528f-4750-ba59-5aaacd35649b/manager/0.log" Jan 03 05:18:31 crc kubenswrapper[4865]: I0103 05:18:31.627626 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mftcp_472b09fa-6397-442e-bd28-40d3dc0aff44/control-plane-machine-set-operator/0.log" Jan 03 05:18:31 crc kubenswrapper[4865]: I0103 05:18:31.775241 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s7cr9_af7bc8fa-5059-413a-b03b-8a95d39f786c/kube-rbac-proxy/0.log" Jan 03 05:18:31 crc kubenswrapper[4865]: I0103 05:18:31.816800 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s7cr9_af7bc8fa-5059-413a-b03b-8a95d39f786c/machine-api-operator/0.log" Jan 03 05:18:45 crc kubenswrapper[4865]: I0103 05:18:45.015324 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wklxs_e0da0830-24dd-48df-8f23-a1338aff9d50/cert-manager-controller/0.log" Jan 03 05:18:45 crc kubenswrapper[4865]: I0103 05:18:45.194212 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jzwg8_8b1175af-0486-4c47-8135-1b968223783e/cert-manager-cainjector/0.log" Jan 03 05:18:45 crc kubenswrapper[4865]: I0103 05:18:45.253980 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jtp7f_f44bcedc-d643-4020-ac35-8777348583ef/cert-manager-webhook/0.log" Jan 03 05:18:59 crc kubenswrapper[4865]: I0103 05:18:59.334264 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-s4wgm_751c8c50-a8f2-456e-95d9-40d6e80de893/nmstate-console-plugin/0.log" Jan 03 05:18:59 crc kubenswrapper[4865]: I0103 05:18:59.561618 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hvkxm_5d65eeb2-6f33-4e5c-8470-f654c785e04f/nmstate-handler/0.log" Jan 03 05:18:59 crc kubenswrapper[4865]: I0103 05:18:59.594753 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-7c2qx_386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e/kube-rbac-proxy/0.log" Jan 03 05:18:59 crc kubenswrapper[4865]: I0103 05:18:59.767000 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-7c2qx_386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e/nmstate-metrics/0.log" Jan 03 05:18:59 crc kubenswrapper[4865]: I0103 05:18:59.770167 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-hbxcc_d09a2e94-e2b5-4780-9269-564415d6627a/nmstate-operator/0.log" Jan 03 05:18:59 crc kubenswrapper[4865]: I0103 05:18:59.950805 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-r6w2b_44d7b71d-b005-4033-8bda-db39169f98a4/nmstate-webhook/0.log" Jan 03 05:19:10 crc kubenswrapper[4865]: I0103 05:19:10.739532 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:19:10 crc kubenswrapper[4865]: I0103 05:19:10.740422 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:19:17 crc kubenswrapper[4865]: I0103 05:19:17.573531 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-g62v9_6f9c0210-2934-4cc6-aec8-f91055a4e30d/kube-rbac-proxy/0.log" Jan 03 05:19:17 crc kubenswrapper[4865]: I0103 05:19:17.684074 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-g62v9_6f9c0210-2934-4cc6-aec8-f91055a4e30d/controller/0.log" Jan 03 05:19:17 crc kubenswrapper[4865]: I0103 05:19:17.807902 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:19:17 crc kubenswrapper[4865]: I0103 05:19:17.970022 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:19:17 crc kubenswrapper[4865]: I0103 05:19:17.977564 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:19:17 crc kubenswrapper[4865]: I0103 05:19:17.982967 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.011785 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.212296 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.232462 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.237822 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.272319 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.461677 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.477993 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.502026 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/controller/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.508649 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.668392 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/frr-metrics/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.735453 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/kube-rbac-proxy/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.760764 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/kube-rbac-proxy-frr/0.log" Jan 03 05:19:18 crc kubenswrapper[4865]: I0103 05:19:18.904771 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/reloader/0.log" Jan 03 05:19:19 crc kubenswrapper[4865]: I0103 05:19:19.011152 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-88j8j_78972b37-6300-455d-8b5d-7a2dbefa88f3/frr-k8s-webhook-server/0.log" Jan 03 05:19:19 crc kubenswrapper[4865]: I0103 05:19:19.150418 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8c8d45d7-vlx4d_ca1c3cce-5140-4f1a-bd20-5d1111357543/manager/0.log" Jan 03 05:19:19 crc kubenswrapper[4865]: I0103 05:19:19.276524 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-9b69d945b-jqfc9_e1b7d25b-22b1-46fd-98e2-8f5de4dfac93/webhook-server/0.log" Jan 03 05:19:19 crc kubenswrapper[4865]: I0103 05:19:19.427900 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhgwc_3ea65f95-887b-447b-b582-c1e91cdf44eb/kube-rbac-proxy/0.log" Jan 03 05:19:19 crc kubenswrapper[4865]: I0103 05:19:19.844840 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/frr/0.log" Jan 03 05:19:19 crc kubenswrapper[4865]: I0103 05:19:19.964249 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhgwc_3ea65f95-887b-447b-b582-c1e91cdf44eb/speaker/0.log" Jan 03 05:19:33 crc kubenswrapper[4865]: I0103 05:19:33.891135 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/util/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.203821 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/util/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.206569 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/pull/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.236655 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/pull/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.471428 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/pull/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.483073 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/util/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.497595 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/extract/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.707726 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/util/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.845720 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/util/0.log" Jan 03 05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.855007 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/pull/0.log" Jan 03 
05:19:34 crc kubenswrapper[4865]: I0103 05:19:34.873633 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/pull/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.046565 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/extract/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.047152 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/util/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.061825 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/pull/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.236728 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-utilities/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.371185 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-utilities/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.394286 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-content/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.421049 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-content/0.log" Jan 03 
05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.586183 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-content/0.log" Jan 03 05:19:35 crc kubenswrapper[4865]: I0103 05:19:35.612380 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-utilities/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.038672 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-utilities/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.138165 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/registry-server/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.150944 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-utilities/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.178901 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-content/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.250743 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-content/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.348503 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-utilities/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.376619 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-content/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.674474 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-86vxz_e0d2175d-f167-4a1f-a14e-df5e69557228/marketplace-operator/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.695446 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-utilities/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.877570 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/registry-server/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.914207 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-utilities/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.925070 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-content/0.log" Jan 03 05:19:36 crc kubenswrapper[4865]: I0103 05:19:36.926348 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-content/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.077787 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-content/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.103891 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-utilities/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.235660 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/registry-server/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.309552 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-utilities/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.457214 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-utilities/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.465457 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-content/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.511635 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-content/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.610038 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-utilities/0.log" Jan 03 05:19:37 crc kubenswrapper[4865]: I0103 05:19:37.623240 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-content/0.log" Jan 03 05:19:38 crc kubenswrapper[4865]: I0103 05:19:38.172897 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/registry-server/0.log" Jan 03 
05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.405215 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68m5q"] Jan 03 05:19:39 crc kubenswrapper[4865]: E0103 05:19:39.405884 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="extract-content" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.405897 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="extract-content" Jan 03 05:19:39 crc kubenswrapper[4865]: E0103 05:19:39.405930 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="extract-utilities" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.405937 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="extract-utilities" Jan 03 05:19:39 crc kubenswrapper[4865]: E0103 05:19:39.405948 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="registry-server" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.405976 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="registry-server" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.406198 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d136b9a3-e0b2-4821-af8d-b59678d776cf" containerName="registry-server" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.407425 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.423164 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68m5q"] Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.462245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnpjg\" (UniqueName: \"kubernetes.io/projected/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-kube-api-access-rnpjg\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.462375 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-utilities\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.462642 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-catalog-content\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.564788 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-catalog-content\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.564848 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rnpjg\" (UniqueName: \"kubernetes.io/projected/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-kube-api-access-rnpjg\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.564928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-utilities\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.565312 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-catalog-content\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.565354 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-utilities\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.587075 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnpjg\" (UniqueName: \"kubernetes.io/projected/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-kube-api-access-rnpjg\") pod \"redhat-operators-68m5q\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:39 crc kubenswrapper[4865]: I0103 05:19:39.733442 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:40 crc kubenswrapper[4865]: I0103 05:19:40.062235 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68m5q"] Jan 03 05:19:40 crc kubenswrapper[4865]: I0103 05:19:40.501333 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68m5q" event={"ID":"2b15bb69-d0dd-40e0-9d87-f85ab01e7739","Type":"ContainerStarted","Data":"cf22271b971e571ae3a280c5cb0f2e0ef9de8e52630c2f69853f27ef064ddb28"} Jan 03 05:19:40 crc kubenswrapper[4865]: I0103 05:19:40.740134 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:19:40 crc kubenswrapper[4865]: I0103 05:19:40.740559 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:19:41 crc kubenswrapper[4865]: I0103 05:19:41.525834 4865 generic.go:334] "Generic (PLEG): container finished" podID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerID="e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2" exitCode=0 Jan 03 05:19:41 crc kubenswrapper[4865]: I0103 05:19:41.525899 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68m5q" event={"ID":"2b15bb69-d0dd-40e0-9d87-f85ab01e7739","Type":"ContainerDied","Data":"e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2"} Jan 03 05:19:41 crc kubenswrapper[4865]: I0103 05:19:41.536887 4865 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Jan 03 05:19:42 crc kubenswrapper[4865]: I0103 05:19:42.536875 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68m5q" event={"ID":"2b15bb69-d0dd-40e0-9d87-f85ab01e7739","Type":"ContainerStarted","Data":"42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0"} Jan 03 05:19:43 crc kubenswrapper[4865]: I0103 05:19:43.548864 4865 generic.go:334] "Generic (PLEG): container finished" podID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerID="42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0" exitCode=0 Jan 03 05:19:43 crc kubenswrapper[4865]: I0103 05:19:43.548921 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68m5q" event={"ID":"2b15bb69-d0dd-40e0-9d87-f85ab01e7739","Type":"ContainerDied","Data":"42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0"} Jan 03 05:19:44 crc kubenswrapper[4865]: I0103 05:19:44.558835 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68m5q" event={"ID":"2b15bb69-d0dd-40e0-9d87-f85ab01e7739","Type":"ContainerStarted","Data":"ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6"} Jan 03 05:19:44 crc kubenswrapper[4865]: I0103 05:19:44.586364 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68m5q" podStartSLOduration=3.13971154 podStartE2EDuration="5.586336011s" podCreationTimestamp="2026-01-03 05:19:39 +0000 UTC" firstStartedPulling="2026-01-03 05:19:41.536610779 +0000 UTC m=+3808.653663974" lastFinishedPulling="2026-01-03 05:19:43.98323526 +0000 UTC m=+3811.100288445" observedRunningTime="2026-01-03 05:19:44.582183569 +0000 UTC m=+3811.699236784" watchObservedRunningTime="2026-01-03 05:19:44.586336011 +0000 UTC m=+3811.703389206" Jan 03 05:19:49 crc kubenswrapper[4865]: I0103 05:19:49.734075 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:49 crc kubenswrapper[4865]: I0103 05:19:49.734750 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:50 crc kubenswrapper[4865]: I0103 05:19:50.791021 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68m5q" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="registry-server" probeResult="failure" output=< Jan 03 05:19:50 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 03 05:19:50 crc kubenswrapper[4865]: > Jan 03 05:19:59 crc kubenswrapper[4865]: I0103 05:19:59.796179 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:19:59 crc kubenswrapper[4865]: I0103 05:19:59.843008 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:20:00 crc kubenswrapper[4865]: I0103 05:20:00.888112 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68m5q"] Jan 03 05:20:01 crc kubenswrapper[4865]: I0103 05:20:01.728668 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68m5q" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="registry-server" containerID="cri-o://ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6" gracePeriod=2 Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.295938 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.444063 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-utilities\") pod \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.444207 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-catalog-content\") pod \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.444245 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnpjg\" (UniqueName: \"kubernetes.io/projected/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-kube-api-access-rnpjg\") pod \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\" (UID: \"2b15bb69-d0dd-40e0-9d87-f85ab01e7739\") " Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.444876 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-utilities" (OuterVolumeSpecName: "utilities") pod "2b15bb69-d0dd-40e0-9d87-f85ab01e7739" (UID: "2b15bb69-d0dd-40e0-9d87-f85ab01e7739"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.447573 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.455536 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-kube-api-access-rnpjg" (OuterVolumeSpecName: "kube-api-access-rnpjg") pod "2b15bb69-d0dd-40e0-9d87-f85ab01e7739" (UID: "2b15bb69-d0dd-40e0-9d87-f85ab01e7739"). InnerVolumeSpecName "kube-api-access-rnpjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.549089 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnpjg\" (UniqueName: \"kubernetes.io/projected/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-kube-api-access-rnpjg\") on node \"crc\" DevicePath \"\"" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.569832 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b15bb69-d0dd-40e0-9d87-f85ab01e7739" (UID: "2b15bb69-d0dd-40e0-9d87-f85ab01e7739"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.652255 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b15bb69-d0dd-40e0-9d87-f85ab01e7739-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.736750 4865 generic.go:334] "Generic (PLEG): container finished" podID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerID="ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6" exitCode=0 Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.736790 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68m5q" event={"ID":"2b15bb69-d0dd-40e0-9d87-f85ab01e7739","Type":"ContainerDied","Data":"ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6"} Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.736816 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68m5q" event={"ID":"2b15bb69-d0dd-40e0-9d87-f85ab01e7739","Type":"ContainerDied","Data":"cf22271b971e571ae3a280c5cb0f2e0ef9de8e52630c2f69853f27ef064ddb28"} Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.736833 4865 scope.go:117] "RemoveContainer" containerID="ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.736836 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68m5q" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.754121 4865 scope.go:117] "RemoveContainer" containerID="42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.771962 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68m5q"] Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.779756 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68m5q"] Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.780263 4865 scope.go:117] "RemoveContainer" containerID="e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.825704 4865 scope.go:117] "RemoveContainer" containerID="ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6" Jan 03 05:20:02 crc kubenswrapper[4865]: E0103 05:20:02.826048 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6\": container with ID starting with ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6 not found: ID does not exist" containerID="ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.826091 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6"} err="failed to get container status \"ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6\": rpc error: code = NotFound desc = could not find container \"ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6\": container with ID starting with ed06904ed5696c1ee20225d5bdb17e83ac2fe08bfa9e24cac3a153eb6e55ace6 not found: ID does 
not exist" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.826118 4865 scope.go:117] "RemoveContainer" containerID="42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0" Jan 03 05:20:02 crc kubenswrapper[4865]: E0103 05:20:02.826401 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0\": container with ID starting with 42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0 not found: ID does not exist" containerID="42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.826441 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0"} err="failed to get container status \"42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0\": rpc error: code = NotFound desc = could not find container \"42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0\": container with ID starting with 42b29b150b1ef3679b8a20d20b5c1216d2d73c3d082d345bd3aaa516b85aeab0 not found: ID does not exist" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.826462 4865 scope.go:117] "RemoveContainer" containerID="e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2" Jan 03 05:20:02 crc kubenswrapper[4865]: E0103 05:20:02.826708 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2\": container with ID starting with e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2 not found: ID does not exist" containerID="e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2" Jan 03 05:20:02 crc kubenswrapper[4865]: I0103 05:20:02.826759 4865 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2"} err="failed to get container status \"e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2\": rpc error: code = NotFound desc = could not find container \"e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2\": container with ID starting with e38fdb99b4decd5ff95d9be7fd7892e60fa3d7b67d9781f1ae510d0ba7e8d4f2 not found: ID does not exist" Jan 03 05:20:03 crc kubenswrapper[4865]: I0103 05:20:03.176887 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" path="/var/lib/kubelet/pods/2b15bb69-d0dd-40e0-9d87-f85ab01e7739/volumes" Jan 03 05:20:09 crc kubenswrapper[4865]: E0103 05:20:09.798803 4865 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:40748->38.102.83.196:38213: write tcp 38.102.83.196:40748->38.102.83.196:38213: write: broken pipe Jan 03 05:20:10 crc kubenswrapper[4865]: I0103 05:20:10.740129 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:20:10 crc kubenswrapper[4865]: I0103 05:20:10.740456 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:20:10 crc kubenswrapper[4865]: I0103 05:20:10.740510 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 05:20:10 crc 
kubenswrapper[4865]: I0103 05:20:10.741304 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:20:10 crc kubenswrapper[4865]: I0103 05:20:10.741365 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" gracePeriod=600 Jan 03 05:20:10 crc kubenswrapper[4865]: E0103 05:20:10.870663 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:20:11 crc kubenswrapper[4865]: I0103 05:20:11.812189 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" exitCode=0 Jan 03 05:20:11 crc kubenswrapper[4865]: I0103 05:20:11.812269 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8"} Jan 03 05:20:11 crc kubenswrapper[4865]: I0103 05:20:11.812461 4865 scope.go:117] "RemoveContainer" 
containerID="a25e79429347d8e01a1796e4ca821fe5a7c3853f5c499d29a26c8104caca8dcd" Jan 03 05:20:11 crc kubenswrapper[4865]: I0103 05:20:11.813256 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:20:11 crc kubenswrapper[4865]: E0103 05:20:11.813649 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:20:26 crc kubenswrapper[4865]: I0103 05:20:26.156139 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:20:26 crc kubenswrapper[4865]: E0103 05:20:26.157146 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:20:39 crc kubenswrapper[4865]: I0103 05:20:39.156182 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:20:39 crc kubenswrapper[4865]: E0103 05:20:39.157426 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:20:52 crc kubenswrapper[4865]: I0103 05:20:52.155854 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:20:52 crc kubenswrapper[4865]: E0103 05:20:52.156573 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:21:04 crc kubenswrapper[4865]: I0103 05:21:04.155841 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:21:04 crc kubenswrapper[4865]: E0103 05:21:04.158747 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:21:16 crc kubenswrapper[4865]: I0103 05:21:16.657963 4865 generic.go:334] "Generic (PLEG): container finished" podID="a4374183-282a-4e98-82de-5a45f98bf733" containerID="303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3" exitCode=0 Jan 03 05:21:16 crc kubenswrapper[4865]: I0103 05:21:16.658114 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" 
event={"ID":"a4374183-282a-4e98-82de-5a45f98bf733","Type":"ContainerDied","Data":"303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3"} Jan 03 05:21:16 crc kubenswrapper[4865]: I0103 05:21:16.659111 4865 scope.go:117] "RemoveContainer" containerID="303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3" Jan 03 05:21:17 crc kubenswrapper[4865]: I0103 05:21:17.037154 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8tqzj_must-gather-tbjwh_a4374183-282a-4e98-82de-5a45f98bf733/gather/0.log" Jan 03 05:21:19 crc kubenswrapper[4865]: I0103 05:21:19.156142 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:21:19 crc kubenswrapper[4865]: E0103 05:21:19.156985 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.244602 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8tqzj/must-gather-tbjwh"] Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.245924 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" podUID="a4374183-282a-4e98-82de-5a45f98bf733" containerName="copy" containerID="cri-o://16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5" gracePeriod=2 Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.255416 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8tqzj/must-gather-tbjwh"] Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.706354 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8tqzj_must-gather-tbjwh_a4374183-282a-4e98-82de-5a45f98bf733/copy/0.log" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.707017 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.752423 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8tqzj_must-gather-tbjwh_a4374183-282a-4e98-82de-5a45f98bf733/copy/0.log" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.752897 4865 generic.go:334] "Generic (PLEG): container finished" podID="a4374183-282a-4e98-82de-5a45f98bf733" containerID="16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5" exitCode=143 Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.752949 4865 scope.go:117] "RemoveContainer" containerID="16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.752991 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8tqzj/must-gather-tbjwh" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.774524 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp4c9\" (UniqueName: \"kubernetes.io/projected/a4374183-282a-4e98-82de-5a45f98bf733-kube-api-access-lp4c9\") pod \"a4374183-282a-4e98-82de-5a45f98bf733\" (UID: \"a4374183-282a-4e98-82de-5a45f98bf733\") " Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.774598 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4374183-282a-4e98-82de-5a45f98bf733-must-gather-output\") pod \"a4374183-282a-4e98-82de-5a45f98bf733\" (UID: \"a4374183-282a-4e98-82de-5a45f98bf733\") " Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.782174 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4374183-282a-4e98-82de-5a45f98bf733-kube-api-access-lp4c9" (OuterVolumeSpecName: "kube-api-access-lp4c9") pod "a4374183-282a-4e98-82de-5a45f98bf733" (UID: "a4374183-282a-4e98-82de-5a45f98bf733"). InnerVolumeSpecName "kube-api-access-lp4c9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.784084 4865 scope.go:117] "RemoveContainer" containerID="303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.877159 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp4c9\" (UniqueName: \"kubernetes.io/projected/a4374183-282a-4e98-82de-5a45f98bf733-kube-api-access-lp4c9\") on node \"crc\" DevicePath \"\"" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.889738 4865 scope.go:117] "RemoveContainer" containerID="16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5" Jan 03 05:21:25 crc kubenswrapper[4865]: E0103 05:21:25.890235 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5\": container with ID starting with 16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5 not found: ID does not exist" containerID="16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.890271 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5"} err="failed to get container status \"16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5\": rpc error: code = NotFound desc = could not find container \"16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5\": container with ID starting with 16cd26c3c096d18d20e6db704601903a2e65eec4ac3bf12e0234059281f81fb5 not found: ID does not exist" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.890296 4865 scope.go:117] "RemoveContainer" containerID="303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3" Jan 03 05:21:25 crc kubenswrapper[4865]: E0103 05:21:25.890803 
4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3\": container with ID starting with 303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3 not found: ID does not exist" containerID="303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.890830 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3"} err="failed to get container status \"303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3\": rpc error: code = NotFound desc = could not find container \"303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3\": container with ID starting with 303c64a89629634956d6429a8d439cfc272c8a242fe0f1d85d6c4d30253111e3 not found: ID does not exist" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.920280 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4374183-282a-4e98-82de-5a45f98bf733-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a4374183-282a-4e98-82de-5a45f98bf733" (UID: "a4374183-282a-4e98-82de-5a45f98bf733"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:21:25 crc kubenswrapper[4865]: I0103 05:21:25.979204 4865 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4374183-282a-4e98-82de-5a45f98bf733-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 03 05:21:27 crc kubenswrapper[4865]: I0103 05:21:27.178668 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4374183-282a-4e98-82de-5a45f98bf733" path="/var/lib/kubelet/pods/a4374183-282a-4e98-82de-5a45f98bf733/volumes" Jan 03 05:21:34 crc kubenswrapper[4865]: I0103 05:21:34.155793 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:21:34 crc kubenswrapper[4865]: E0103 05:21:34.156848 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:21:46 crc kubenswrapper[4865]: I0103 05:21:46.155956 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:21:46 crc kubenswrapper[4865]: E0103 05:21:46.156881 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:21:58 crc kubenswrapper[4865]: I0103 05:21:58.161665 4865 
scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:21:58 crc kubenswrapper[4865]: E0103 05:21:58.163163 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:22:11 crc kubenswrapper[4865]: I0103 05:22:11.156016 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:22:11 crc kubenswrapper[4865]: E0103 05:22:11.156919 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:22:25 crc kubenswrapper[4865]: I0103 05:22:25.157059 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:22:25 crc kubenswrapper[4865]: E0103 05:22:25.158267 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:22:37 crc kubenswrapper[4865]: I0103 
05:22:37.156905 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:22:37 crc kubenswrapper[4865]: E0103 05:22:37.158298 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:22:49 crc kubenswrapper[4865]: I0103 05:22:49.156093 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:22:49 crc kubenswrapper[4865]: E0103 05:22:49.156898 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:23:01 crc kubenswrapper[4865]: I0103 05:23:01.156615 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:23:01 crc kubenswrapper[4865]: E0103 05:23:01.157604 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:23:14 crc 
kubenswrapper[4865]: I0103 05:23:14.156123 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:23:14 crc kubenswrapper[4865]: E0103 05:23:14.157032 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:23:27 crc kubenswrapper[4865]: I0103 05:23:27.158631 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:23:27 crc kubenswrapper[4865]: E0103 05:23:27.161407 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:23:42 crc kubenswrapper[4865]: I0103 05:23:42.156479 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:23:42 crc kubenswrapper[4865]: E0103 05:23:42.158861 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 
03 05:23:56 crc kubenswrapper[4865]: I0103 05:23:56.177146 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:23:56 crc kubenswrapper[4865]: E0103 05:23:56.177993 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:24:07 crc kubenswrapper[4865]: I0103 05:24:07.155691 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:24:07 crc kubenswrapper[4865]: E0103 05:24:07.156485 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.155367 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:24:18 crc kubenswrapper[4865]: E0103 05:24:18.156514 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.976778 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p2wlz/must-gather-jbtrp"] Jan 03 05:24:18 crc kubenswrapper[4865]: E0103 05:24:18.977188 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="registry-server" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.977207 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="registry-server" Jan 03 05:24:18 crc kubenswrapper[4865]: E0103 05:24:18.977228 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="extract-content" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.977237 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="extract-content" Jan 03 05:24:18 crc kubenswrapper[4865]: E0103 05:24:18.977260 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="extract-utilities" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.977268 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="extract-utilities" Jan 03 05:24:18 crc kubenswrapper[4865]: E0103 05:24:18.977280 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4374183-282a-4e98-82de-5a45f98bf733" containerName="copy" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.977287 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4374183-282a-4e98-82de-5a45f98bf733" containerName="copy" Jan 03 05:24:18 crc kubenswrapper[4865]: E0103 05:24:18.977302 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4374183-282a-4e98-82de-5a45f98bf733" containerName="gather" Jan 03 05:24:18 crc 
kubenswrapper[4865]: I0103 05:24:18.977309 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4374183-282a-4e98-82de-5a45f98bf733" containerName="gather" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.977522 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4374183-282a-4e98-82de-5a45f98bf733" containerName="gather" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.977544 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b15bb69-d0dd-40e0-9d87-f85ab01e7739" containerName="registry-server" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.977555 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4374183-282a-4e98-82de-5a45f98bf733" containerName="copy" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.978586 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.983079 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p2wlz"/"kube-root-ca.crt" Jan 03 05:24:18 crc kubenswrapper[4865]: I0103 05:24:18.995718 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p2wlz"/"openshift-service-ca.crt" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.004914 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p2wlz/must-gather-jbtrp"] Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.100276 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-must-gather-output\") pod \"must-gather-jbtrp\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.100329 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sqjw\" (UniqueName: \"kubernetes.io/projected/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-kube-api-access-7sqjw\") pod \"must-gather-jbtrp\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.201978 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-must-gather-output\") pod \"must-gather-jbtrp\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.202410 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sqjw\" (UniqueName: \"kubernetes.io/projected/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-kube-api-access-7sqjw\") pod \"must-gather-jbtrp\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.203187 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-must-gather-output\") pod \"must-gather-jbtrp\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.220561 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sqjw\" (UniqueName: \"kubernetes.io/projected/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-kube-api-access-7sqjw\") pod \"must-gather-jbtrp\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.310659 4865 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:24:19 crc kubenswrapper[4865]: I0103 05:24:19.781502 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p2wlz/must-gather-jbtrp"] Jan 03 05:24:20 crc kubenswrapper[4865]: I0103 05:24:20.596967 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" event={"ID":"6903dcf3-34bd-4744-8d41-115dc0e5c1e6","Type":"ContainerStarted","Data":"b3f2be84c9963c8d4242c66cd8edf8d25342c85169956b6db900d955b63d3b26"} Jan 03 05:24:20 crc kubenswrapper[4865]: I0103 05:24:20.597254 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" event={"ID":"6903dcf3-34bd-4744-8d41-115dc0e5c1e6","Type":"ContainerStarted","Data":"ef18b966f38813877d73838343a6adb12b82604d15024def44342ed538e88caa"} Jan 03 05:24:20 crc kubenswrapper[4865]: I0103 05:24:20.597268 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" event={"ID":"6903dcf3-34bd-4744-8d41-115dc0e5c1e6","Type":"ContainerStarted","Data":"27169b5fbca16199471cf061f6e40991f321aefd29d2e51a0672a0f5e6bd2291"} Jan 03 05:24:20 crc kubenswrapper[4865]: I0103 05:24:20.625007 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" podStartSLOduration=2.624978601 podStartE2EDuration="2.624978601s" podCreationTimestamp="2026-01-03 05:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:24:20.619892464 +0000 UTC m=+4087.736945689" watchObservedRunningTime="2026-01-03 05:24:20.624978601 +0000 UTC m=+4087.742031796" Jan 03 05:24:23 crc kubenswrapper[4865]: I0103 05:24:23.907676 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-bkq5v"] Jan 03 05:24:23 crc 
kubenswrapper[4865]: I0103 05:24:23.909546 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:23 crc kubenswrapper[4865]: I0103 05:24:23.911495 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p2wlz"/"default-dockercfg-sg86p" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.090940 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bef799b-4727-4d7f-b401-286342950e94-host\") pod \"crc-debug-bkq5v\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.091014 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2rbj\" (UniqueName: \"kubernetes.io/projected/8bef799b-4727-4d7f-b401-286342950e94-kube-api-access-h2rbj\") pod \"crc-debug-bkq5v\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.192369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bef799b-4727-4d7f-b401-286342950e94-host\") pod \"crc-debug-bkq5v\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.192463 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2rbj\" (UniqueName: \"kubernetes.io/projected/8bef799b-4727-4d7f-b401-286342950e94-kube-api-access-h2rbj\") pod \"crc-debug-bkq5v\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.192995 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bef799b-4727-4d7f-b401-286342950e94-host\") pod \"crc-debug-bkq5v\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.221685 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2rbj\" (UniqueName: \"kubernetes.io/projected/8bef799b-4727-4d7f-b401-286342950e94-kube-api-access-h2rbj\") pod \"crc-debug-bkq5v\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.230932 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.643024 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" event={"ID":"8bef799b-4727-4d7f-b401-286342950e94","Type":"ContainerStarted","Data":"33392842e427b1e6e95057e3226dac74eab5e10ff93abe42c57075eb2e3efdfb"} Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.643521 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" event={"ID":"8bef799b-4727-4d7f-b401-286342950e94","Type":"ContainerStarted","Data":"d3275ccdb160e79fcf984674d882c7d49691f301fae1e8a4820e10eadf0876e1"} Jan 03 05:24:24 crc kubenswrapper[4865]: I0103 05:24:24.662669 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" podStartSLOduration=1.662649958 podStartE2EDuration="1.662649958s" podCreationTimestamp="2026-01-03 05:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:24:24.657508859 +0000 UTC m=+4091.774562044" 
watchObservedRunningTime="2026-01-03 05:24:24.662649958 +0000 UTC m=+4091.779703153" Jan 03 05:24:32 crc kubenswrapper[4865]: I0103 05:24:32.156491 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:24:32 crc kubenswrapper[4865]: E0103 05:24:32.157371 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:24:47 crc kubenswrapper[4865]: I0103 05:24:47.157248 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:24:47 crc kubenswrapper[4865]: E0103 05:24:47.157900 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:24:56 crc kubenswrapper[4865]: E0103 05:24:56.654958 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bef799b_4727_4d7f_b401_286342950e94.slice/crio-conmon-33392842e427b1e6e95057e3226dac74eab5e10ff93abe42c57075eb2e3efdfb.scope\": RecentStats: unable to find data in memory cache]" Jan 03 05:24:56 crc kubenswrapper[4865]: I0103 05:24:56.928373 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="8bef799b-4727-4d7f-b401-286342950e94" containerID="33392842e427b1e6e95057e3226dac74eab5e10ff93abe42c57075eb2e3efdfb" exitCode=0 Jan 03 05:24:56 crc kubenswrapper[4865]: I0103 05:24:56.928424 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" event={"ID":"8bef799b-4727-4d7f-b401-286342950e94","Type":"ContainerDied","Data":"33392842e427b1e6e95057e3226dac74eab5e10ff93abe42c57075eb2e3efdfb"} Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.164858 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.203767 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-bkq5v"] Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.213989 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-bkq5v"] Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.264640 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bef799b-4727-4d7f-b401-286342950e94-host\") pod \"8bef799b-4727-4d7f-b401-286342950e94\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.264782 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2rbj\" (UniqueName: \"kubernetes.io/projected/8bef799b-4727-4d7f-b401-286342950e94-kube-api-access-h2rbj\") pod \"8bef799b-4727-4d7f-b401-286342950e94\" (UID: \"8bef799b-4727-4d7f-b401-286342950e94\") " Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.265711 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bef799b-4727-4d7f-b401-286342950e94-host" (OuterVolumeSpecName: "host") pod "8bef799b-4727-4d7f-b401-286342950e94" (UID: 
"8bef799b-4727-4d7f-b401-286342950e94"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.271579 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bef799b-4727-4d7f-b401-286342950e94-kube-api-access-h2rbj" (OuterVolumeSpecName: "kube-api-access-h2rbj") pod "8bef799b-4727-4d7f-b401-286342950e94" (UID: "8bef799b-4727-4d7f-b401-286342950e94"). InnerVolumeSpecName "kube-api-access-h2rbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.366701 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2rbj\" (UniqueName: \"kubernetes.io/projected/8bef799b-4727-4d7f-b401-286342950e94-kube-api-access-h2rbj\") on node \"crc\" DevicePath \"\"" Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.366737 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bef799b-4727-4d7f-b401-286342950e94-host\") on node \"crc\" DevicePath \"\"" Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.946357 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3275ccdb160e79fcf984674d882c7d49691f301fae1e8a4820e10eadf0876e1" Jan 03 05:24:58 crc kubenswrapper[4865]: I0103 05:24:58.946419 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-bkq5v" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.156725 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:24:59 crc kubenswrapper[4865]: E0103 05:24:59.157026 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.168579 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bef799b-4727-4d7f-b401-286342950e94" path="/var/lib/kubelet/pods/8bef799b-4727-4d7f-b401-286342950e94/volumes" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.454700 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-hg8sx"] Jan 03 05:24:59 crc kubenswrapper[4865]: E0103 05:24:59.455121 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bef799b-4727-4d7f-b401-286342950e94" containerName="container-00" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.455133 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bef799b-4727-4d7f-b401-286342950e94" containerName="container-00" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.455299 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bef799b-4727-4d7f-b401-286342950e94" containerName="container-00" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.456035 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.458199 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p2wlz"/"default-dockercfg-sg86p" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.586289 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7wdh\" (UniqueName: \"kubernetes.io/projected/28c5830c-41a3-49bd-827c-6ae26995b6db-kube-api-access-l7wdh\") pod \"crc-debug-hg8sx\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.586460 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28c5830c-41a3-49bd-827c-6ae26995b6db-host\") pod \"crc-debug-hg8sx\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.690440 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28c5830c-41a3-49bd-827c-6ae26995b6db-host\") pod \"crc-debug-hg8sx\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.690646 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7wdh\" (UniqueName: \"kubernetes.io/projected/28c5830c-41a3-49bd-827c-6ae26995b6db-kube-api-access-l7wdh\") pod \"crc-debug-hg8sx\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.691215 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/28c5830c-41a3-49bd-827c-6ae26995b6db-host\") pod \"crc-debug-hg8sx\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.711367 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7wdh\" (UniqueName: \"kubernetes.io/projected/28c5830c-41a3-49bd-827c-6ae26995b6db-kube-api-access-l7wdh\") pod \"crc-debug-hg8sx\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.776313 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:24:59 crc kubenswrapper[4865]: W0103 05:24:59.814286 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28c5830c_41a3_49bd_827c_6ae26995b6db.slice/crio-588f195e83bcc44fd2c49f536660612f27ddb69d3378f4720a43f5531f39f15c WatchSource:0}: Error finding container 588f195e83bcc44fd2c49f536660612f27ddb69d3378f4720a43f5531f39f15c: Status 404 returned error can't find the container with id 588f195e83bcc44fd2c49f536660612f27ddb69d3378f4720a43f5531f39f15c Jan 03 05:24:59 crc kubenswrapper[4865]: I0103 05:24:59.954733 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" event={"ID":"28c5830c-41a3-49bd-827c-6ae26995b6db","Type":"ContainerStarted","Data":"588f195e83bcc44fd2c49f536660612f27ddb69d3378f4720a43f5531f39f15c"} Jan 03 05:25:00 crc kubenswrapper[4865]: I0103 05:25:00.963002 4865 generic.go:334] "Generic (PLEG): container finished" podID="28c5830c-41a3-49bd-827c-6ae26995b6db" containerID="8fddbede74aa29f7885c3e264650c3f2ceb4195755013d67456f0d660e30cc70" exitCode=0 Jan 03 05:25:00 crc kubenswrapper[4865]: I0103 05:25:00.963096 4865 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" event={"ID":"28c5830c-41a3-49bd-827c-6ae26995b6db","Type":"ContainerDied","Data":"8fddbede74aa29f7885c3e264650c3f2ceb4195755013d67456f0d660e30cc70"} Jan 03 05:25:01 crc kubenswrapper[4865]: I0103 05:25:01.369644 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-hg8sx"] Jan 03 05:25:01 crc kubenswrapper[4865]: I0103 05:25:01.377082 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-hg8sx"] Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.077868 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.236602 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7wdh\" (UniqueName: \"kubernetes.io/projected/28c5830c-41a3-49bd-827c-6ae26995b6db-kube-api-access-l7wdh\") pod \"28c5830c-41a3-49bd-827c-6ae26995b6db\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.236912 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28c5830c-41a3-49bd-827c-6ae26995b6db-host\") pod \"28c5830c-41a3-49bd-827c-6ae26995b6db\" (UID: \"28c5830c-41a3-49bd-827c-6ae26995b6db\") " Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.237362 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28c5830c-41a3-49bd-827c-6ae26995b6db-host" (OuterVolumeSpecName: "host") pod "28c5830c-41a3-49bd-827c-6ae26995b6db" (UID: "28c5830c-41a3-49bd-827c-6ae26995b6db"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.249358 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c5830c-41a3-49bd-827c-6ae26995b6db-kube-api-access-l7wdh" (OuterVolumeSpecName: "kube-api-access-l7wdh") pod "28c5830c-41a3-49bd-827c-6ae26995b6db" (UID: "28c5830c-41a3-49bd-827c-6ae26995b6db"). InnerVolumeSpecName "kube-api-access-l7wdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.339442 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7wdh\" (UniqueName: \"kubernetes.io/projected/28c5830c-41a3-49bd-827c-6ae26995b6db-kube-api-access-l7wdh\") on node \"crc\" DevicePath \"\"" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.339486 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28c5830c-41a3-49bd-827c-6ae26995b6db-host\") on node \"crc\" DevicePath \"\"" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.568810 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-b5p8c"] Jan 03 05:25:02 crc kubenswrapper[4865]: E0103 05:25:02.569265 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c5830c-41a3-49bd-827c-6ae26995b6db" containerName="container-00" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.569301 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c5830c-41a3-49bd-827c-6ae26995b6db" containerName="container-00" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.569856 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c5830c-41a3-49bd-827c-6ae26995b6db" containerName="container-00" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.570620 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.643315 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30553342-ac52-4963-af0e-94c3a9d271bc-host\") pod \"crc-debug-b5p8c\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.643360 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtj6p\" (UniqueName: \"kubernetes.io/projected/30553342-ac52-4963-af0e-94c3a9d271bc-kube-api-access-dtj6p\") pod \"crc-debug-b5p8c\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.745160 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30553342-ac52-4963-af0e-94c3a9d271bc-host\") pod \"crc-debug-b5p8c\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.745208 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtj6p\" (UniqueName: \"kubernetes.io/projected/30553342-ac52-4963-af0e-94c3a9d271bc-kube-api-access-dtj6p\") pod \"crc-debug-b5p8c\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.745322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30553342-ac52-4963-af0e-94c3a9d271bc-host\") pod \"crc-debug-b5p8c\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc 
kubenswrapper[4865]: I0103 05:25:02.760276 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtj6p\" (UniqueName: \"kubernetes.io/projected/30553342-ac52-4963-af0e-94c3a9d271bc-kube-api-access-dtj6p\") pod \"crc-debug-b5p8c\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.940664 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:02 crc kubenswrapper[4865]: W0103 05:25:02.965981 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30553342_ac52_4963_af0e_94c3a9d271bc.slice/crio-bf605ab05c0437e62385700f028c90b801225d08fabcf5fdfe050b94c2bb9078 WatchSource:0}: Error finding container bf605ab05c0437e62385700f028c90b801225d08fabcf5fdfe050b94c2bb9078: Status 404 returned error can't find the container with id bf605ab05c0437e62385700f028c90b801225d08fabcf5fdfe050b94c2bb9078 Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.977676 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" event={"ID":"30553342-ac52-4963-af0e-94c3a9d271bc","Type":"ContainerStarted","Data":"bf605ab05c0437e62385700f028c90b801225d08fabcf5fdfe050b94c2bb9078"} Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.979271 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588f195e83bcc44fd2c49f536660612f27ddb69d3378f4720a43f5531f39f15c" Jan 03 05:25:02 crc kubenswrapper[4865]: I0103 05:25:02.979360 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-hg8sx" Jan 03 05:25:03 crc kubenswrapper[4865]: I0103 05:25:03.167798 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c5830c-41a3-49bd-827c-6ae26995b6db" path="/var/lib/kubelet/pods/28c5830c-41a3-49bd-827c-6ae26995b6db/volumes" Jan 03 05:25:03 crc kubenswrapper[4865]: I0103 05:25:03.988174 4865 generic.go:334] "Generic (PLEG): container finished" podID="30553342-ac52-4963-af0e-94c3a9d271bc" containerID="2ad27cde3ef346fa8fa0fff6b429311fb5c2762860df34b148afdf695f470212" exitCode=0 Jan 03 05:25:03 crc kubenswrapper[4865]: I0103 05:25:03.988225 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" event={"ID":"30553342-ac52-4963-af0e-94c3a9d271bc","Type":"ContainerDied","Data":"2ad27cde3ef346fa8fa0fff6b429311fb5c2762860df34b148afdf695f470212"} Jan 03 05:25:04 crc kubenswrapper[4865]: I0103 05:25:04.023880 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-b5p8c"] Jan 03 05:25:04 crc kubenswrapper[4865]: I0103 05:25:04.030680 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p2wlz/crc-debug-b5p8c"] Jan 03 05:25:05 crc kubenswrapper[4865]: I0103 05:25:05.110047 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:05 crc kubenswrapper[4865]: I0103 05:25:05.190676 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtj6p\" (UniqueName: \"kubernetes.io/projected/30553342-ac52-4963-af0e-94c3a9d271bc-kube-api-access-dtj6p\") pod \"30553342-ac52-4963-af0e-94c3a9d271bc\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " Jan 03 05:25:05 crc kubenswrapper[4865]: I0103 05:25:05.190809 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30553342-ac52-4963-af0e-94c3a9d271bc-host\") pod \"30553342-ac52-4963-af0e-94c3a9d271bc\" (UID: \"30553342-ac52-4963-af0e-94c3a9d271bc\") " Jan 03 05:25:05 crc kubenswrapper[4865]: I0103 05:25:05.192501 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30553342-ac52-4963-af0e-94c3a9d271bc-host" (OuterVolumeSpecName: "host") pod "30553342-ac52-4963-af0e-94c3a9d271bc" (UID: "30553342-ac52-4963-af0e-94c3a9d271bc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 05:25:05 crc kubenswrapper[4865]: I0103 05:25:05.218300 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30553342-ac52-4963-af0e-94c3a9d271bc-kube-api-access-dtj6p" (OuterVolumeSpecName: "kube-api-access-dtj6p") pod "30553342-ac52-4963-af0e-94c3a9d271bc" (UID: "30553342-ac52-4963-af0e-94c3a9d271bc"). InnerVolumeSpecName "kube-api-access-dtj6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:25:05 crc kubenswrapper[4865]: I0103 05:25:05.294246 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtj6p\" (UniqueName: \"kubernetes.io/projected/30553342-ac52-4963-af0e-94c3a9d271bc-kube-api-access-dtj6p\") on node \"crc\" DevicePath \"\"" Jan 03 05:25:05 crc kubenswrapper[4865]: I0103 05:25:05.294287 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30553342-ac52-4963-af0e-94c3a9d271bc-host\") on node \"crc\" DevicePath \"\"" Jan 03 05:25:06 crc kubenswrapper[4865]: I0103 05:25:06.010794 4865 scope.go:117] "RemoveContainer" containerID="2ad27cde3ef346fa8fa0fff6b429311fb5c2762860df34b148afdf695f470212" Jan 03 05:25:06 crc kubenswrapper[4865]: I0103 05:25:06.010848 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/crc-debug-b5p8c" Jan 03 05:25:07 crc kubenswrapper[4865]: I0103 05:25:07.171568 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30553342-ac52-4963-af0e-94c3a9d271bc" path="/var/lib/kubelet/pods/30553342-ac52-4963-af0e-94c3a9d271bc/volumes" Jan 03 05:25:14 crc kubenswrapper[4865]: I0103 05:25:14.155991 4865 scope.go:117] "RemoveContainer" containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:25:15 crc kubenswrapper[4865]: I0103 05:25:15.104962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"a6a22db5e4b97e93086f27b4ecd545339e0be3ad2ab0bae428941f6a6737412c"} Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.667242 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8zhkd"] Jan 03 05:25:22 crc kubenswrapper[4865]: E0103 05:25:22.674270 4865 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="30553342-ac52-4963-af0e-94c3a9d271bc" containerName="container-00" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.674395 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="30553342-ac52-4963-af0e-94c3a9d271bc" containerName="container-00" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.674640 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="30553342-ac52-4963-af0e-94c3a9d271bc" containerName="container-00" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.676298 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.685868 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zhkd"] Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.778094 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854bk\" (UniqueName: \"kubernetes.io/projected/614e7364-962b-479a-9307-6f62dab5821e-kube-api-access-854bk\") pod \"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.778854 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-utilities\") pod \"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.778976 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-catalog-content\") pod 
\"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.881057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-utilities\") pod \"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.881111 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-catalog-content\") pod \"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.881160 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-854bk\" (UniqueName: \"kubernetes.io/projected/614e7364-962b-479a-9307-6f62dab5821e-kube-api-access-854bk\") pod \"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.881671 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-utilities\") pod \"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.881694 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-catalog-content\") pod \"community-operators-8zhkd\" (UID: 
\"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.909924 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-854bk\" (UniqueName: \"kubernetes.io/projected/614e7364-962b-479a-9307-6f62dab5821e-kube-api-access-854bk\") pod \"community-operators-8zhkd\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:22 crc kubenswrapper[4865]: I0103 05:25:22.995656 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:23 crc kubenswrapper[4865]: I0103 05:25:23.516439 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zhkd"] Jan 03 05:25:24 crc kubenswrapper[4865]: I0103 05:25:24.194197 4865 generic.go:334] "Generic (PLEG): container finished" podID="614e7364-962b-479a-9307-6f62dab5821e" containerID="1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2" exitCode=0 Jan 03 05:25:24 crc kubenswrapper[4865]: I0103 05:25:24.194574 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zhkd" event={"ID":"614e7364-962b-479a-9307-6f62dab5821e","Type":"ContainerDied","Data":"1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2"} Jan 03 05:25:24 crc kubenswrapper[4865]: I0103 05:25:24.194625 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zhkd" event={"ID":"614e7364-962b-479a-9307-6f62dab5821e","Type":"ContainerStarted","Data":"fcf9b7562c2df2ee8d81bbdf48632c4edc8bf89b9dfcec654f6e2ca8e7aef57d"} Jan 03 05:25:24 crc kubenswrapper[4865]: I0103 05:25:24.197197 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 05:25:26 crc kubenswrapper[4865]: I0103 05:25:26.213522 4865 
generic.go:334] "Generic (PLEG): container finished" podID="614e7364-962b-479a-9307-6f62dab5821e" containerID="7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5" exitCode=0 Jan 03 05:25:26 crc kubenswrapper[4865]: I0103 05:25:26.213604 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zhkd" event={"ID":"614e7364-962b-479a-9307-6f62dab5821e","Type":"ContainerDied","Data":"7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5"} Jan 03 05:25:27 crc kubenswrapper[4865]: I0103 05:25:27.226711 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zhkd" event={"ID":"614e7364-962b-479a-9307-6f62dab5821e","Type":"ContainerStarted","Data":"eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12"} Jan 03 05:25:27 crc kubenswrapper[4865]: I0103 05:25:27.252552 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8zhkd" podStartSLOduration=2.8100627080000002 podStartE2EDuration="5.252526956s" podCreationTimestamp="2026-01-03 05:25:22 +0000 UTC" firstStartedPulling="2026-01-03 05:25:24.196563386 +0000 UTC m=+4151.313616571" lastFinishedPulling="2026-01-03 05:25:26.639027634 +0000 UTC m=+4153.756080819" observedRunningTime="2026-01-03 05:25:27.249256988 +0000 UTC m=+4154.366310183" watchObservedRunningTime="2026-01-03 05:25:27.252526956 +0000 UTC m=+4154.369580161" Jan 03 05:25:32 crc kubenswrapper[4865]: I0103 05:25:32.995848 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:32 crc kubenswrapper[4865]: I0103 05:25:32.996223 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:33 crc kubenswrapper[4865]: I0103 05:25:33.040125 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:33 crc kubenswrapper[4865]: I0103 05:25:33.362778 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:33 crc kubenswrapper[4865]: I0103 05:25:33.423522 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zhkd"] Jan 03 05:25:35 crc kubenswrapper[4865]: I0103 05:25:35.297510 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8zhkd" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="registry-server" containerID="cri-o://eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12" gracePeriod=2 Jan 03 05:25:35 crc kubenswrapper[4865]: I0103 05:25:35.465298 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5974c4f958-8d8f6_9890e74a-cb62-411d-8cf0-ce88ffcc73e0/barbican-api/0.log" Jan 03 05:25:35 crc kubenswrapper[4865]: I0103 05:25:35.735692 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5974c4f958-8d8f6_9890e74a-cb62-411d-8cf0-ce88ffcc73e0/barbican-api-log/0.log" Jan 03 05:25:35 crc kubenswrapper[4865]: I0103 05:25:35.814172 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-588ffb9974-8pr57_e6d49d7a-9faf-486d-a98d-4067f581c56c/barbican-keystone-listener/0.log" Jan 03 05:25:35 crc kubenswrapper[4865]: I0103 05:25:35.835592 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-588ffb9974-8pr57_e6d49d7a-9faf-486d-a98d-4067f581c56c/barbican-keystone-listener-log/0.log" Jan 03 05:25:35 crc kubenswrapper[4865]: I0103 05:25:35.886754 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:35 crc kubenswrapper[4865]: I0103 05:25:35.988038 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b6fdb99ff-s5qqm_d53a478c-ba6a-4210-b219-66540ed365c6/barbican-worker/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.043769 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-854bk\" (UniqueName: \"kubernetes.io/projected/614e7364-962b-479a-9307-6f62dab5821e-kube-api-access-854bk\") pod \"614e7364-962b-479a-9307-6f62dab5821e\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.043852 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-utilities\") pod \"614e7364-962b-479a-9307-6f62dab5821e\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.043885 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-catalog-content\") pod \"614e7364-962b-479a-9307-6f62dab5821e\" (UID: \"614e7364-962b-479a-9307-6f62dab5821e\") " Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.044840 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-utilities" (OuterVolumeSpecName: "utilities") pod "614e7364-962b-479a-9307-6f62dab5821e" (UID: "614e7364-962b-479a-9307-6f62dab5821e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.047901 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b6fdb99ff-s5qqm_d53a478c-ba6a-4210-b219-66540ed365c6/barbican-worker-log/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.055653 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614e7364-962b-479a-9307-6f62dab5821e-kube-api-access-854bk" (OuterVolumeSpecName: "kube-api-access-854bk") pod "614e7364-962b-479a-9307-6f62dab5821e" (UID: "614e7364-962b-479a-9307-6f62dab5821e"). InnerVolumeSpecName "kube-api-access-854bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.101987 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "614e7364-962b-479a-9307-6f62dab5821e" (UID: "614e7364-962b-479a-9307-6f62dab5821e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.145691 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-854bk\" (UniqueName: \"kubernetes.io/projected/614e7364-962b-479a-9307-6f62dab5821e-kube-api-access-854bk\") on node \"crc\" DevicePath \"\"" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.145727 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.145740 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614e7364-962b-479a-9307-6f62dab5821e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.251961 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nf52v_32245d9a-04a2-4ee3-99ae-6c876313c5a1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.306574 4865 generic.go:334] "Generic (PLEG): container finished" podID="614e7364-962b-479a-9307-6f62dab5821e" containerID="eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12" exitCode=0 Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.306612 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zhkd" event={"ID":"614e7364-962b-479a-9307-6f62dab5821e","Type":"ContainerDied","Data":"eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12"} Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.306638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zhkd" 
event={"ID":"614e7364-962b-479a-9307-6f62dab5821e","Type":"ContainerDied","Data":"fcf9b7562c2df2ee8d81bbdf48632c4edc8bf89b9dfcec654f6e2ca8e7aef57d"} Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.306655 4865 scope.go:117] "RemoveContainer" containerID="eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.306777 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zhkd" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.326533 4865 scope.go:117] "RemoveContainer" containerID="7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.340674 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/ceilometer-central-agent/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.349444 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zhkd"] Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.368427 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8zhkd"] Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.373534 4865 scope.go:117] "RemoveContainer" containerID="1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.425197 4865 scope.go:117] "RemoveContainer" containerID="eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12" Jan 03 05:25:36 crc kubenswrapper[4865]: E0103 05:25:36.429092 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12\": container with ID starting with eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12 not found: ID 
does not exist" containerID="eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.429134 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12"} err="failed to get container status \"eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12\": rpc error: code = NotFound desc = could not find container \"eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12\": container with ID starting with eb36e1e89070fbed575df85482913629ef782b42bc2e410a9c30b38c9d34cc12 not found: ID does not exist" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.429164 4865 scope.go:117] "RemoveContainer" containerID="7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5" Jan 03 05:25:36 crc kubenswrapper[4865]: E0103 05:25:36.430051 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5\": container with ID starting with 7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5 not found: ID does not exist" containerID="7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.430091 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5"} err="failed to get container status \"7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5\": rpc error: code = NotFound desc = could not find container \"7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5\": container with ID starting with 7f04416f3d39b65a05b10a5a488516ea92410440c9c4ca329fd427c37cf18ab5 not found: ID does not exist" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.431743 4865 
scope.go:117] "RemoveContainer" containerID="1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2" Jan 03 05:25:36 crc kubenswrapper[4865]: E0103 05:25:36.440527 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2\": container with ID starting with 1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2 not found: ID does not exist" containerID="1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.440569 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2"} err="failed to get container status \"1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2\": rpc error: code = NotFound desc = could not find container \"1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2\": container with ID starting with 1478e3575f84d975482fdd261889bfa0f90b3357595d6274c737f1ff1e819fb2 not found: ID does not exist" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.499010 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/ceilometer-notification-agent/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.627408 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/sg-core/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.650057 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_209f508b-63d1-4413-95a8-8e539aaaa606/proxy-httpd/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.716799 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_74a06fab-e04b-4eca-b4b1-a9d69b526c1d/cinder-api/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.869604 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_74a06fab-e04b-4eca-b4b1-a9d69b526c1d/cinder-api-log/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.933884 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0bfb3310-1647-4ce9-887c-ccff650d42c5/cinder-scheduler/0.log" Jan 03 05:25:36 crc kubenswrapper[4865]: I0103 05:25:36.999062 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0bfb3310-1647-4ce9-887c-ccff650d42c5/probe/0.log" Jan 03 05:25:37 crc kubenswrapper[4865]: I0103 05:25:37.111961 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-t2pzk_1811dd2a-9abd-466c-8c53-992d887c9321/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:37 crc kubenswrapper[4865]: I0103 05:25:37.168456 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614e7364-962b-479a-9307-6f62dab5821e" path="/var/lib/kubelet/pods/614e7364-962b-479a-9307-6f62dab5821e/volumes" Jan 03 05:25:37 crc kubenswrapper[4865]: I0103 05:25:37.193218 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bp6l4_65c55653-7592-4a07-bfc2-c6273437c99c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:37 crc kubenswrapper[4865]: I0103 05:25:37.307828 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-kxf4v_5a6c30d9-afcf-463b-a58f-dfc353d40686/init/0.log" Jan 03 05:25:37 crc kubenswrapper[4865]: I0103 05:25:37.592452 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-kxf4v_5a6c30d9-afcf-463b-a58f-dfc353d40686/init/0.log" Jan 03 
05:25:37 crc kubenswrapper[4865]: I0103 05:25:37.595104 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-kxf4v_5a6c30d9-afcf-463b-a58f-dfc353d40686/dnsmasq-dns/0.log" Jan 03 05:25:37 crc kubenswrapper[4865]: I0103 05:25:37.596769 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-w7bcz_012b0a73-ea86-4b62-aad3-f6b4f63a32bc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:38 crc kubenswrapper[4865]: I0103 05:25:38.369531 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_879b9de0-9d7c-46dc-b9b1-80c16cbebaa0/glance-log/0.log" Jan 03 05:25:38 crc kubenswrapper[4865]: I0103 05:25:38.440032 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_879b9de0-9d7c-46dc-b9b1-80c16cbebaa0/glance-httpd/0.log" Jan 03 05:25:38 crc kubenswrapper[4865]: I0103 05:25:38.611699 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_75b65689-ffa6-4b7c-b6c2-2f8e48f4a333/glance-log/0.log" Jan 03 05:25:38 crc kubenswrapper[4865]: I0103 05:25:38.615411 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_75b65689-ffa6-4b7c-b6c2-2f8e48f4a333/glance-httpd/0.log" Jan 03 05:25:38 crc kubenswrapper[4865]: I0103 05:25:38.737276 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c8ff89456-njqfs_565fcd3f-e73a-446a-b862-717cfb106bd1/horizon/0.log" Jan 03 05:25:38 crc kubenswrapper[4865]: I0103 05:25:38.890857 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-lxb6x_c777a6c5-214d-40e9-b948-0e8d7a872578/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:39 crc kubenswrapper[4865]: I0103 05:25:39.122301 4865 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6wxhc_bee03f5c-0eca-42a3-9d5d-ea38f06a775b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:39 crc kubenswrapper[4865]: I0103 05:25:39.171599 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c8ff89456-njqfs_565fcd3f-e73a-446a-b862-717cfb106bd1/horizon-log/0.log" Jan 03 05:25:39 crc kubenswrapper[4865]: I0103 05:25:39.311795 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b8664f56d-q48t7_d707525c-50ad-4b99-b59c-177bcae86c4c/keystone-api/0.log" Jan 03 05:25:39 crc kubenswrapper[4865]: I0103 05:25:39.400602 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29456941-p58zp_74c76b1f-c632-4f93-add1-5d8150f79004/keystone-cron/0.log" Jan 03 05:25:39 crc kubenswrapper[4865]: I0103 05:25:39.523291 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5668e757-efa2-4bac-a269-6e2cdd9dbfef/kube-state-metrics/0.log" Jan 03 05:25:39 crc kubenswrapper[4865]: I0103 05:25:39.705690 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-d9mpl_11b63132-1f33-4f08-9ddd-b705cc52d950/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:40 crc kubenswrapper[4865]: I0103 05:25:40.491672 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b946bd96f-ph9x6_a52305ce-1bb8-4ff4-9d6b-0cf652186e17/neutron-httpd/0.log" Jan 03 05:25:40 crc kubenswrapper[4865]: I0103 05:25:40.541586 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b946bd96f-ph9x6_a52305ce-1bb8-4ff4-9d6b-0cf652186e17/neutron-api/0.log" Jan 03 05:25:40 crc kubenswrapper[4865]: I0103 05:25:40.639181 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dd4wd_649b14b9-86dc-4aa5-9086-8a90038e573f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:41 crc kubenswrapper[4865]: I0103 05:25:41.075237 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_673e4191-53ee-4b5d-8bbe-289693bab15d/nova-api-log/0.log" Jan 03 05:25:41 crc kubenswrapper[4865]: I0103 05:25:41.259657 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_a5fca133-2a81-47d7-8f20-62c55d10c3e6/nova-cell0-conductor-conductor/0.log" Jan 03 05:25:41 crc kubenswrapper[4865]: I0103 05:25:41.466024 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0c9ddcb0-e282-46fd-8d38-f5ec2ce4aeb0/nova-cell1-conductor-conductor/0.log" Jan 03 05:25:41 crc kubenswrapper[4865]: I0103 05:25:41.564180 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_673e4191-53ee-4b5d-8bbe-289693bab15d/nova-api-api/0.log" Jan 03 05:25:41 crc kubenswrapper[4865]: I0103 05:25:41.600684 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_02276ef3-b599-4ad0-be9e-690430084e13/nova-cell1-novncproxy-novncproxy/0.log" Jan 03 05:25:41 crc kubenswrapper[4865]: I0103 05:25:41.712452 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-7kv5x_0d3ac9c6-cfbf-4614-abb7-9a4338b90aab/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:41 crc kubenswrapper[4865]: I0103 05:25:41.921595 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_db9bcaee-003e-4fb7-b1b6-477c6583c4cc/nova-metadata-log/0.log" Jan 03 05:25:42 crc kubenswrapper[4865]: I0103 05:25:42.179978 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_20843cac-0ba6-4f8f-b767-dd61fdb4f160/mysql-bootstrap/0.log" Jan 03 05:25:42 crc kubenswrapper[4865]: I0103 05:25:42.337185 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_874d8744-eb6f-46f5-a6b7-35348b4f9359/nova-scheduler-scheduler/0.log" Jan 03 05:25:42 crc kubenswrapper[4865]: I0103 05:25:42.392717 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_20843cac-0ba6-4f8f-b767-dd61fdb4f160/mysql-bootstrap/0.log" Jan 03 05:25:42 crc kubenswrapper[4865]: I0103 05:25:42.420988 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_20843cac-0ba6-4f8f-b767-dd61fdb4f160/galera/0.log" Jan 03 05:25:42 crc kubenswrapper[4865]: I0103 05:25:42.660870 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2eba40ea-25a4-4887-aa6c-7feb32b91491/mysql-bootstrap/0.log" Jan 03 05:25:42 crc kubenswrapper[4865]: I0103 05:25:42.844637 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2eba40ea-25a4-4887-aa6c-7feb32b91491/mysql-bootstrap/0.log" Jan 03 05:25:42 crc kubenswrapper[4865]: I0103 05:25:42.850623 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2eba40ea-25a4-4887-aa6c-7feb32b91491/galera/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.024342 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a3ac055f-a850-4676-8bc2-0cd50509ff30/openstackclient/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.168523 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_db9bcaee-003e-4fb7-b1b6-477c6583c4cc/nova-metadata-metadata/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.200635 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-gpwwp_334ea42d-9265-43f9-8c4c-fdf516746069/ovn-controller/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.374763 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qv9sm_c7da9b69-55d5-43a2-8e3c-2a25ca513ce6/openstack-network-exporter/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.572665 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovsdb-server-init/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.741142 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovsdb-server/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.747044 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovsdb-server-init/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.752894 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5hbf4_5dc49e44-6dba-457d-b535-41a724d9640f/ovs-vswitchd/0.log" Jan 03 05:25:43 crc kubenswrapper[4865]: I0103 05:25:43.990451 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5bec493c-4a1f-49db-b9f3-d05bffd3541b/openstack-network-exporter/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.019372 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hc56k_306b5e02-b107-4d9b-9d6e-66c1d4a5ed11/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.048222 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5bec493c-4a1f-49db-b9f3-d05bffd3541b/ovn-northd/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.195850 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4b281b80-3b3a-4c04-a904-669d66ec4a74/openstack-network-exporter/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.213367 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4b281b80-3b3a-4c04-a904-669d66ec4a74/ovsdbserver-nb/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.420592 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9d25819-b14f-411e-a158-b9f315cf13d6/openstack-network-exporter/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.433825 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9d25819-b14f-411e-a158-b9f315cf13d6/ovsdbserver-sb/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.626082 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-677444457b-ftr4x_f1045fbc-a935-4634-a207-aa8b027c9768/placement-api/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.760000 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5/setup-container/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.764354 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-677444457b-ftr4x_f1045fbc-a935-4634-a207-aa8b027c9768/placement-log/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.956567 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5/setup-container/0.log" Jan 03 05:25:44 crc kubenswrapper[4865]: I0103 05:25:44.968706 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3b26f2aa-ddac-4d96-b129-4738eee8fdb8/setup-container/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.016083 4865 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a70654b8-fa7b-46ac-88ae-aa1ac0b74eb5/rabbitmq/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.174331 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3b26f2aa-ddac-4d96-b129-4738eee8fdb8/rabbitmq/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.204700 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3b26f2aa-ddac-4d96-b129-4738eee8fdb8/setup-container/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.288858 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xtm7p_f4adfc92-6b31-4832-9159-7ec2b85b018f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.454897 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vdnjr_76548eb3-2e5a-4325-85c3-3dac91f58d9b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.485715 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4qt5g_5f26a495-d92f-42c6-9395-d4cb6e0037f5/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.639102 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gm6fp_f635f6c7-e6b9-49f1-ba28-59fd66a1c425/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:45 crc kubenswrapper[4865]: I0103 05:25:45.757074 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-lsx5q_4ae9c175-8601-467f-8f66-220277a0ffe1/ssh-known-hosts-edpm-deployment/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.032516 4865 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-proxy-5496856655-kc92p_638adfee-76ef-47db-bd03-1dbffb050ac8/proxy-server/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.062730 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x464g_6baae859-c56d-42e9-a3da-1e883afc3047/swift-ring-rebalance/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.063414 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5496856655-kc92p_638adfee-76ef-47db-bd03-1dbffb050ac8/proxy-httpd/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.235757 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-auditor/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.255345 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-reaper/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.334619 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-replicator/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.454732 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/account-server/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.471833 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-auditor/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.486300 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-replicator/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.546028 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-server/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.647494 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-auditor/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.886446 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/container-updater/0.log" Jan 03 05:25:46 crc kubenswrapper[4865]: I0103 05:25:46.902041 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-expirer/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.008511 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-replicator/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.353017 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-server/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.408883 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/object-updater/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.462940 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/swift-recon-cron/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.466890 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_870f799a-79a7-40ff-9a9c-ecb096c9bfcb/rsync/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.694300 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4nlsj_842e7570-e53d-4a45-91cf-d37579440783/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.750448 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d51a1b58-dba9-4c1f-87bb-bce07ad57852/tempest-tests-tempest-tests-runner/0.log" Jan 03 05:25:47 crc kubenswrapper[4865]: I0103 05:25:47.901253 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_627c2ef6-ef77-4f3a-b7e9-a56643ffece7/test-operator-logs-container/0.log" Jan 03 05:25:48 crc kubenswrapper[4865]: I0103 05:25:48.030246 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lvf7w_81ff7929-70dd-400a-ae25-8e7425e5a9ae/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 03 05:25:56 crc kubenswrapper[4865]: I0103 05:25:56.615075 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_31864768-e1b4-438d-b88d-a5a8f9e89e5e/memcached/0.log" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.804207 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zb5cb"] Jan 03 05:26:12 crc kubenswrapper[4865]: E0103 05:26:12.805314 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="extract-utilities" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.805336 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="extract-utilities" Jan 03 05:26:12 crc kubenswrapper[4865]: E0103 05:26:12.805352 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="extract-content" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 
05:26:12.805366 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="extract-content" Jan 03 05:26:12 crc kubenswrapper[4865]: E0103 05:26:12.805469 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="registry-server" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.805483 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="registry-server" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.805813 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="614e7364-962b-479a-9307-6f62dab5821e" containerName="registry-server" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.808254 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.822834 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb5cb"] Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.907047 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-utilities\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.907140 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz642\" (UniqueName: \"kubernetes.io/projected/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-kube-api-access-fz642\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:12 crc kubenswrapper[4865]: I0103 05:26:12.907204 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-catalog-content\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.009468 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-utilities\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.009605 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz642\" (UniqueName: \"kubernetes.io/projected/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-kube-api-access-fz642\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.009664 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-catalog-content\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.010155 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-utilities\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.010290 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-catalog-content\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.038816 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz642\" (UniqueName: \"kubernetes.io/projected/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-kube-api-access-fz642\") pod \"redhat-marketplace-zb5cb\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.134814 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.612306 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb5cb"] Jan 03 05:26:13 crc kubenswrapper[4865]: W0103 05:26:13.616594 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91f94b7f_c944_4c9b_ab51_fae4ac86f1fd.slice/crio-50042670c2d6a0fc1f78bc191949f4186ee27c81360281bc8f5173e240310aa0 WatchSource:0}: Error finding container 50042670c2d6a0fc1f78bc191949f4186ee27c81360281bc8f5173e240310aa0: Status 404 returned error can't find the container with id 50042670c2d6a0fc1f78bc191949f4186ee27c81360281bc8f5173e240310aa0 Jan 03 05:26:13 crc kubenswrapper[4865]: I0103 05:26:13.645734 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb5cb" event={"ID":"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd","Type":"ContainerStarted","Data":"50042670c2d6a0fc1f78bc191949f4186ee27c81360281bc8f5173e240310aa0"} Jan 03 05:26:14 crc kubenswrapper[4865]: I0103 
05:26:14.659633 4865 generic.go:334] "Generic (PLEG): container finished" podID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerID="3af77f79408e13756d8cf5289baad54e90c8698d71d92e59cc60c4148d4994eb" exitCode=0 Jan 03 05:26:14 crc kubenswrapper[4865]: I0103 05:26:14.659982 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb5cb" event={"ID":"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd","Type":"ContainerDied","Data":"3af77f79408e13756d8cf5289baad54e90c8698d71d92e59cc60c4148d4994eb"} Jan 03 05:26:15 crc kubenswrapper[4865]: I0103 05:26:15.616578 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/util/0.log" Jan 03 05:26:15 crc kubenswrapper[4865]: I0103 05:26:15.668476 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb5cb" event={"ID":"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd","Type":"ContainerStarted","Data":"ea046c739351e51ae1d6fd04167fe4fa2221732ffebc733420e1872395bba350"} Jan 03 05:26:15 crc kubenswrapper[4865]: I0103 05:26:15.816875 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/pull/0.log" Jan 03 05:26:15 crc kubenswrapper[4865]: I0103 05:26:15.836598 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/pull/0.log" Jan 03 05:26:15 crc kubenswrapper[4865]: I0103 05:26:15.843553 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/util/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.081776 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/util/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.089294 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/extract/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.096622 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_142f043630a1069cee8b3619134013536070b9968e54d0bd278ac5ad94bhxxn_d63c1299-0c01-4495-bff5-70ea344821dd/pull/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.279980 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-pk2cc_e4095c6a-c9c9-42c0-b79e-a4f467563d27/manager/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.405830 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-8p5g8_675aef60-25dd-4113-a4cf-2f9b91a21150/manager/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.494242 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-sxcmb_13c73aaf-30a7-4530-afff-39ec069fccde/manager/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.677240 4865 generic.go:334] "Generic (PLEG): container finished" podID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerID="ea046c739351e51ae1d6fd04167fe4fa2221732ffebc733420e1872395bba350" exitCode=0 Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.677434 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb5cb" 
event={"ID":"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd","Type":"ContainerDied","Data":"ea046c739351e51ae1d6fd04167fe4fa2221732ffebc733420e1872395bba350"} Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.680022 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-2lcpm_45a74d9c-8e20-4f90-ad8b-8e139ad592fd/manager/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.685298 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-xk5t7_8ed18047-e002-419d-b950-2535d4d778c1/manager/0.log" Jan 03 05:26:16 crc kubenswrapper[4865]: I0103 05:26:16.832552 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-wlwkg_78091396-35cf-4a65-878b-02705fd27e09/manager/0.log" Jan 03 05:26:17 crc kubenswrapper[4865]: I0103 05:26:17.627424 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-2cfvd_1e7c8270-346b-429d-a775-abb648245a40/manager/0.log" Jan 03 05:26:17 crc kubenswrapper[4865]: I0103 05:26:17.688367 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb5cb" event={"ID":"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd","Type":"ContainerStarted","Data":"63c5c058efefd9536e9291ed4bd82dca3c08df0cacf092653ceb866d71bcef29"} Jan 03 05:26:17 crc kubenswrapper[4865]: I0103 05:26:17.714264 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zb5cb" podStartSLOduration=3.3225407860000002 podStartE2EDuration="5.714247631s" podCreationTimestamp="2026-01-03 05:26:12 +0000 UTC" firstStartedPulling="2026-01-03 05:26:14.662802543 +0000 UTC m=+4201.779855758" lastFinishedPulling="2026-01-03 05:26:17.054509418 +0000 UTC m=+4204.171562603" observedRunningTime="2026-01-03 
05:26:17.707632252 +0000 UTC m=+4204.824685437" watchObservedRunningTime="2026-01-03 05:26:17.714247631 +0000 UTC m=+4204.831300816" Jan 03 05:26:17 crc kubenswrapper[4865]: I0103 05:26:17.815991 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-648996cf74-xqj6p_ce7f03b5-6280-4cd8-b3a6-865329b1b9ce/manager/0.log" Jan 03 05:26:17 crc kubenswrapper[4865]: I0103 05:26:17.873108 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-d6psw_0adfe2b3-9c76-4213-a856-e834ff2b24e0/manager/0.log" Jan 03 05:26:18 crc kubenswrapper[4865]: I0103 05:26:18.028055 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-95md4_31214035-ff7b-4c07-87b8-52a98b09cd52/manager/0.log" Jan 03 05:26:18 crc kubenswrapper[4865]: I0103 05:26:18.102356 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-vp89g_c4804e80-40f4-4f53-abfb-cafc1299f889/manager/0.log" Jan 03 05:26:18 crc kubenswrapper[4865]: I0103 05:26:18.230815 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-dt8d6_dc3a99b3-36d9-41bc-94f7-74b47980f602/manager/0.log" Jan 03 05:26:18 crc kubenswrapper[4865]: I0103 05:26:18.401643 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-fzx6g_76d489d2-17da-4af8-8fc5-d8ce6451a45c/manager/0.log" Jan 03 05:26:18 crc kubenswrapper[4865]: I0103 05:26:18.443572 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-8jc6g_1785ab82-9c1c-41c0-aa07-0285dd49b221/manager/0.log" Jan 03 05:26:18 crc kubenswrapper[4865]: I0103 05:26:18.582514 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7j7bsq_aae3d614-123b-48a1-81fa-84f2c04b3923/manager/0.log" Jan 03 05:26:19 crc kubenswrapper[4865]: I0103 05:26:19.021731 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-m27bl_d69f6b83-1ca1-48a6-b701-033533fe63d0/registry-server/0.log" Jan 03 05:26:19 crc kubenswrapper[4865]: I0103 05:26:19.031472 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5954d5f7bc-c9qjh_b69ab3cd-b729-4ca3-83c2-989f7660d826/operator/0.log" Jan 03 05:26:19 crc kubenswrapper[4865]: I0103 05:26:19.646029 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-zfntt_500fd3bd-494f-428c-9437-a71add6116d6/manager/0.log" Jan 03 05:26:19 crc kubenswrapper[4865]: I0103 05:26:19.671204 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-gsncl_a179b7cc-8be0-4956-83ad-7b8b8087103b/manager/0.log" Jan 03 05:26:19 crc kubenswrapper[4865]: I0103 05:26:19.849283 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8zvh8_ebc5bbec-4b11-47c8-a018-ddefda88a53b/operator/0.log" Jan 03 05:26:19 crc kubenswrapper[4865]: I0103 05:26:19.917068 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-9hsk5_ac1ae731-f2d2-436a-b3ef-641ebf79814d/manager/0.log" Jan 03 05:26:19 crc kubenswrapper[4865]: I0103 05:26:19.939582 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54cff86f68-wmvwl_fe531b80-38a8-4a91-95c3-cd9ffe4dee91/manager/0.log" Jan 03 05:26:20 crc kubenswrapper[4865]: I0103 05:26:20.072584 4865 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-kgjdb_56f034ca-a5ed-4b5b-89ca-82ff95662601/manager/0.log" Jan 03 05:26:20 crc kubenswrapper[4865]: I0103 05:26:20.107639 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-fggfw_6af3b14c-24a5-4cf7-8cad-9583e2eb0b40/manager/0.log" Jan 03 05:26:20 crc kubenswrapper[4865]: I0103 05:26:20.202065 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-cshl4_8006ff7b-528f-4750-ba59-5aaacd35649b/manager/0.log" Jan 03 05:26:23 crc kubenswrapper[4865]: I0103 05:26:23.135735 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:23 crc kubenswrapper[4865]: I0103 05:26:23.136367 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:23 crc kubenswrapper[4865]: I0103 05:26:23.185073 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:23 crc kubenswrapper[4865]: I0103 05:26:23.770507 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:23 crc kubenswrapper[4865]: I0103 05:26:23.809162 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb5cb"] Jan 03 05:26:25 crc kubenswrapper[4865]: I0103 05:26:25.748650 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zb5cb" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="registry-server" containerID="cri-o://63c5c058efefd9536e9291ed4bd82dca3c08df0cacf092653ceb866d71bcef29" gracePeriod=2 Jan 03 05:26:26 crc 
kubenswrapper[4865]: I0103 05:26:26.760909 4865 generic.go:334] "Generic (PLEG): container finished" podID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerID="63c5c058efefd9536e9291ed4bd82dca3c08df0cacf092653ceb866d71bcef29" exitCode=0 Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.761000 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb5cb" event={"ID":"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd","Type":"ContainerDied","Data":"63c5c058efefd9536e9291ed4bd82dca3c08df0cacf092653ceb866d71bcef29"} Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.761496 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb5cb" event={"ID":"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd","Type":"ContainerDied","Data":"50042670c2d6a0fc1f78bc191949f4186ee27c81360281bc8f5173e240310aa0"} Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.761517 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50042670c2d6a0fc1f78bc191949f4186ee27c81360281bc8f5173e240310aa0" Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.787729 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.860137 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-catalog-content\") pod \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.860212 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-utilities\") pod \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.860243 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz642\" (UniqueName: \"kubernetes.io/projected/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-kube-api-access-fz642\") pod \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\" (UID: \"91f94b7f-c944-4c9b-ab51-fae4ac86f1fd\") " Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.861872 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-utilities" (OuterVolumeSpecName: "utilities") pod "91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" (UID: "91f94b7f-c944-4c9b-ab51-fae4ac86f1fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.872920 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-kube-api-access-fz642" (OuterVolumeSpecName: "kube-api-access-fz642") pod "91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" (UID: "91f94b7f-c944-4c9b-ab51-fae4ac86f1fd"). InnerVolumeSpecName "kube-api-access-fz642". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.885248 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" (UID: "91f94b7f-c944-4c9b-ab51-fae4ac86f1fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.962076 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.962109 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz642\" (UniqueName: \"kubernetes.io/projected/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-kube-api-access-fz642\") on node \"crc\" DevicePath \"\"" Jan 03 05:26:26 crc kubenswrapper[4865]: I0103 05:26:26.962119 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 05:26:27 crc kubenswrapper[4865]: I0103 05:26:27.767660 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb5cb" Jan 03 05:26:27 crc kubenswrapper[4865]: I0103 05:26:27.785859 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb5cb"] Jan 03 05:26:27 crc kubenswrapper[4865]: I0103 05:26:27.793260 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb5cb"] Jan 03 05:26:29 crc kubenswrapper[4865]: I0103 05:26:29.168048 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" path="/var/lib/kubelet/pods/91f94b7f-c944-4c9b-ab51-fae4ac86f1fd/volumes" Jan 03 05:26:41 crc kubenswrapper[4865]: I0103 05:26:41.543797 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mftcp_472b09fa-6397-442e-bd28-40d3dc0aff44/control-plane-machine-set-operator/0.log" Jan 03 05:26:41 crc kubenswrapper[4865]: I0103 05:26:41.684839 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s7cr9_af7bc8fa-5059-413a-b03b-8a95d39f786c/kube-rbac-proxy/0.log" Jan 03 05:26:41 crc kubenswrapper[4865]: I0103 05:26:41.697042 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s7cr9_af7bc8fa-5059-413a-b03b-8a95d39f786c/machine-api-operator/0.log" Jan 03 05:26:56 crc kubenswrapper[4865]: I0103 05:26:56.273266 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wklxs_e0da0830-24dd-48df-8f23-a1338aff9d50/cert-manager-controller/0.log" Jan 03 05:26:56 crc kubenswrapper[4865]: I0103 05:26:56.379055 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jzwg8_8b1175af-0486-4c47-8135-1b968223783e/cert-manager-cainjector/0.log" Jan 03 05:26:56 crc kubenswrapper[4865]: I0103 05:26:56.507303 
4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-jtp7f_f44bcedc-d643-4020-ac35-8777348583ef/cert-manager-webhook/0.log" Jan 03 05:27:12 crc kubenswrapper[4865]: I0103 05:27:12.796344 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hvkxm_5d65eeb2-6f33-4e5c-8470-f654c785e04f/nmstate-handler/0.log" Jan 03 05:27:12 crc kubenswrapper[4865]: I0103 05:27:12.875532 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-s4wgm_751c8c50-a8f2-456e-95d9-40d6e80de893/nmstate-console-plugin/0.log" Jan 03 05:27:12 crc kubenswrapper[4865]: I0103 05:27:12.981539 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-7c2qx_386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e/kube-rbac-proxy/0.log" Jan 03 05:27:13 crc kubenswrapper[4865]: I0103 05:27:13.067476 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-7c2qx_386bf1ab-ac9c-4ee2-8c8f-e57b98bc590e/nmstate-metrics/0.log" Jan 03 05:27:13 crc kubenswrapper[4865]: I0103 05:27:13.164135 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-hbxcc_d09a2e94-e2b5-4780-9269-564415d6627a/nmstate-operator/0.log" Jan 03 05:27:13 crc kubenswrapper[4865]: I0103 05:27:13.244077 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-r6w2b_44d7b71d-b005-4033-8bda-db39169f98a4/nmstate-webhook/0.log" Jan 03 05:27:29 crc kubenswrapper[4865]: I0103 05:27:29.392271 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-g62v9_6f9c0210-2934-4cc6-aec8-f91055a4e30d/controller/0.log" Jan 03 05:27:29 crc kubenswrapper[4865]: I0103 05:27:29.539887 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-g62v9_6f9c0210-2934-4cc6-aec8-f91055a4e30d/kube-rbac-proxy/0.log" Jan 03 05:27:29 crc kubenswrapper[4865]: I0103 05:27:29.711749 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:27:29 crc kubenswrapper[4865]: I0103 05:27:29.895540 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:27:29 crc kubenswrapper[4865]: I0103 05:27:29.927621 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:27:29 crc kubenswrapper[4865]: I0103 05:27:29.945012 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:27:29 crc kubenswrapper[4865]: I0103 05:27:29.973842 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.165577 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.175808 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.176161 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.229711 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.442072 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-reloader/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.451883 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/controller/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.465624 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-frr-files/0.log" Jan 03 05:27:30 crc kubenswrapper[4865]: I0103 05:27:30.491688 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/cp-metrics/0.log" Jan 03 05:27:31 crc kubenswrapper[4865]: I0103 05:27:31.487807 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/frr-metrics/0.log" Jan 03 05:27:31 crc kubenswrapper[4865]: I0103 05:27:31.492192 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/kube-rbac-proxy-frr/0.log" Jan 03 05:27:31 crc kubenswrapper[4865]: I0103 05:27:31.528805 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/kube-rbac-proxy/0.log" Jan 03 05:27:31 crc kubenswrapper[4865]: I0103 05:27:31.696513 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/reloader/0.log" Jan 03 05:27:31 crc kubenswrapper[4865]: I0103 05:27:31.772466 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-88j8j_78972b37-6300-455d-8b5d-7a2dbefa88f3/frr-k8s-webhook-server/0.log" Jan 03 05:27:31 crc kubenswrapper[4865]: I0103 05:27:31.914480 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8c8d45d7-vlx4d_ca1c3cce-5140-4f1a-bd20-5d1111357543/manager/0.log" Jan 03 05:27:32 crc kubenswrapper[4865]: I0103 05:27:32.116502 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-9b69d945b-jqfc9_e1b7d25b-22b1-46fd-98e2-8f5de4dfac93/webhook-server/0.log" Jan 03 05:27:32 crc kubenswrapper[4865]: I0103 05:27:32.182493 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhgwc_3ea65f95-887b-447b-b582-c1e91cdf44eb/kube-rbac-proxy/0.log" Jan 03 05:27:32 crc kubenswrapper[4865]: I0103 05:27:32.793399 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhgwc_3ea65f95-887b-447b-b582-c1e91cdf44eb/speaker/0.log" Jan 03 05:27:32 crc kubenswrapper[4865]: I0103 05:27:32.856702 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9d8bc_4309435e-08d9-4379-895f-d297474cd646/frr/0.log" Jan 03 05:27:40 crc kubenswrapper[4865]: I0103 05:27:40.739196 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:27:40 crc kubenswrapper[4865]: I0103 05:27:40.739780 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.336446 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/util/0.log" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.500397 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/pull/0.log" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.536367 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/pull/0.log" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.559095 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/util/0.log" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.708491 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/util/0.log" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.710687 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/pull/0.log" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.733574 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4fd4dv_b2d5b470-3337-4613-937f-5e289519fc5a/extract/0.log" Jan 03 05:27:46 crc kubenswrapper[4865]: I0103 05:27:46.880831 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/util/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.028211 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/util/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.057108 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/pull/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.075453 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/pull/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.221076 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/util/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.268157 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/extract/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.278815 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8dmn42_cfe6b8e2-59c4-41ac-9665-54fab9d47829/pull/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.422046 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-utilities/0.log" Jan 03 05:27:47 crc 
kubenswrapper[4865]: I0103 05:27:47.533314 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-utilities/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.540530 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-content/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.584785 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-content/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.712081 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-utilities/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.717702 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/extract-content/0.log" Jan 03 05:27:47 crc kubenswrapper[4865]: I0103 05:27:47.952003 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-utilities/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.084010 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-content/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.133578 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-utilities/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.142035 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-content/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.309857 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tnlsn_3014cf2d-2752-436d-8878-4883e654999a/registry-server/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.380619 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-utilities/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.430524 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/extract-content/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.587557 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-86vxz_e0d2175d-f167-4a1f-a14e-df5e69557228/marketplace-operator/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.814003 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-utilities/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.958052 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gld9_44a95df7-b26c-417b-8756-88655a7b34d3/registry-server/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.973704 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-utilities/0.log" Jan 03 05:27:48 crc kubenswrapper[4865]: I0103 05:27:48.974083 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-content/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.010875 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-content/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.177445 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-content/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.213450 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/extract-utilities/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.373315 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mgfcx_e8a3dce4-c06d-4329-9c3c-71813d2c44d3/registry-server/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.403714 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-utilities/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.539027 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-content/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.549641 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-utilities/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.550810 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-content/0.log" 
Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.751196 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-utilities/0.log" Jan 03 05:27:49 crc kubenswrapper[4865]: I0103 05:27:49.752552 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/extract-content/0.log" Jan 03 05:27:50 crc kubenswrapper[4865]: I0103 05:27:50.372453 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2f87v_61cc1eff-fb8d-4b15-bc9a-9be54c149447/registry-server/0.log" Jan 03 05:28:10 crc kubenswrapper[4865]: I0103 05:28:10.739370 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:28:10 crc kubenswrapper[4865]: I0103 05:28:10.739852 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:28:40 crc kubenswrapper[4865]: I0103 05:28:40.740069 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 05:28:40 crc kubenswrapper[4865]: I0103 05:28:40.741092 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" 
podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 05:28:40 crc kubenswrapper[4865]: I0103 05:28:40.741140 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" Jan 03 05:28:40 crc kubenswrapper[4865]: I0103 05:28:40.742031 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6a22db5e4b97e93086f27b4ecd545339e0be3ad2ab0bae428941f6a6737412c"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 05:28:40 crc kubenswrapper[4865]: I0103 05:28:40.742103 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://a6a22db5e4b97e93086f27b4ecd545339e0be3ad2ab0bae428941f6a6737412c" gracePeriod=600 Jan 03 05:28:41 crc kubenswrapper[4865]: I0103 05:28:41.126095 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="a6a22db5e4b97e93086f27b4ecd545339e0be3ad2ab0bae428941f6a6737412c" exitCode=0 Jan 03 05:28:41 crc kubenswrapper[4865]: I0103 05:28:41.126593 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"a6a22db5e4b97e93086f27b4ecd545339e0be3ad2ab0bae428941f6a6737412c"} Jan 03 05:28:41 crc kubenswrapper[4865]: I0103 05:28:41.126639 4865 scope.go:117] "RemoveContainer" 
containerID="597bf9bec3fa9ba038442ae3d92c1d43e85a3a6fa700e47b178110e47bca88a8" Jan 03 05:28:42 crc kubenswrapper[4865]: I0103 05:28:42.142949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerStarted","Data":"2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa"} Jan 03 05:29:31 crc kubenswrapper[4865]: I0103 05:29:31.717713 4865 generic.go:334] "Generic (PLEG): container finished" podID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerID="ef18b966f38813877d73838343a6adb12b82604d15024def44342ed538e88caa" exitCode=0 Jan 03 05:29:31 crc kubenswrapper[4865]: I0103 05:29:31.717813 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" event={"ID":"6903dcf3-34bd-4744-8d41-115dc0e5c1e6","Type":"ContainerDied","Data":"ef18b966f38813877d73838343a6adb12b82604d15024def44342ed538e88caa"} Jan 03 05:29:31 crc kubenswrapper[4865]: I0103 05:29:31.719224 4865 scope.go:117] "RemoveContainer" containerID="ef18b966f38813877d73838343a6adb12b82604d15024def44342ed538e88caa" Jan 03 05:29:31 crc kubenswrapper[4865]: I0103 05:29:31.870365 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p2wlz_must-gather-jbtrp_6903dcf3-34bd-4744-8d41-115dc0e5c1e6/gather/0.log" Jan 03 05:29:42 crc kubenswrapper[4865]: I0103 05:29:42.765669 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p2wlz/must-gather-jbtrp"] Jan 03 05:29:42 crc kubenswrapper[4865]: I0103 05:29:42.766597 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerName="copy" containerID="cri-o://b3f2be84c9963c8d4242c66cd8edf8d25342c85169956b6db900d955b63d3b26" gracePeriod=2 Jan 03 05:29:42 crc kubenswrapper[4865]: I0103 05:29:42.779621 4865 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p2wlz/must-gather-jbtrp"] Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.858636 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p2wlz_must-gather-jbtrp_6903dcf3-34bd-4744-8d41-115dc0e5c1e6/copy/0.log" Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.859433 4865 generic.go:334] "Generic (PLEG): container finished" podID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerID="b3f2be84c9963c8d4242c66cd8edf8d25342c85169956b6db900d955b63d3b26" exitCode=143 Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.859480 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27169b5fbca16199471cf061f6e40991f321aefd29d2e51a0672a0f5e6bd2291" Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.891507 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p2wlz_must-gather-jbtrp_6903dcf3-34bd-4744-8d41-115dc0e5c1e6/copy/0.log" Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.891963 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.926423 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-must-gather-output\") pod \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.926761 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sqjw\" (UniqueName: \"kubernetes.io/projected/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-kube-api-access-7sqjw\") pod \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\" (UID: \"6903dcf3-34bd-4744-8d41-115dc0e5c1e6\") " Jan 03 05:29:43 crc kubenswrapper[4865]: I0103 05:29:43.933109 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-kube-api-access-7sqjw" (OuterVolumeSpecName: "kube-api-access-7sqjw") pod "6903dcf3-34bd-4744-8d41-115dc0e5c1e6" (UID: "6903dcf3-34bd-4744-8d41-115dc0e5c1e6"). InnerVolumeSpecName "kube-api-access-7sqjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:29:44 crc kubenswrapper[4865]: I0103 05:29:44.029423 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sqjw\" (UniqueName: \"kubernetes.io/projected/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-kube-api-access-7sqjw\") on node \"crc\" DevicePath \"\"" Jan 03 05:29:44 crc kubenswrapper[4865]: I0103 05:29:44.085503 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6903dcf3-34bd-4744-8d41-115dc0e5c1e6" (UID: "6903dcf3-34bd-4744-8d41-115dc0e5c1e6"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 05:29:44 crc kubenswrapper[4865]: I0103 05:29:44.130704 4865 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6903dcf3-34bd-4744-8d41-115dc0e5c1e6-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 03 05:29:44 crc kubenswrapper[4865]: I0103 05:29:44.867550 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p2wlz/must-gather-jbtrp" Jan 03 05:29:45 crc kubenswrapper[4865]: I0103 05:29:45.171757 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" path="/var/lib/kubelet/pods/6903dcf3-34bd-4744-8d41-115dc0e5c1e6/volumes" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.222690 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t"] Jan 03 05:30:00 crc kubenswrapper[4865]: E0103 05:30:00.223873 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="extract-content" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.223895 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="extract-content" Jan 03 05:30:00 crc kubenswrapper[4865]: E0103 05:30:00.223936 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="registry-server" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.223948 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="registry-server" Jan 03 05:30:00 crc kubenswrapper[4865]: E0103 05:30:00.223969 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="extract-utilities" Jan 03 05:30:00 crc 
kubenswrapper[4865]: I0103 05:30:00.223982 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="extract-utilities" Jan 03 05:30:00 crc kubenswrapper[4865]: E0103 05:30:00.224007 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerName="copy" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.224019 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerName="copy" Jan 03 05:30:00 crc kubenswrapper[4865]: E0103 05:30:00.224042 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerName="gather" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.224054 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerName="gather" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.224405 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerName="gather" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.224448 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f94b7f-c944-4c9b-ab51-fae4ac86f1fd" containerName="registry-server" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.224471 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6903dcf3-34bd-4744-8d41-115dc0e5c1e6" containerName="copy" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.225606 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.229120 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.229124 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.242818 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t"] Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.335475 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-secret-volume\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.335543 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npd7r\" (UniqueName: \"kubernetes.io/projected/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-kube-api-access-npd7r\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.335823 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-config-volume\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.438445 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-secret-volume\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.438522 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npd7r\" (UniqueName: \"kubernetes.io/projected/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-kube-api-access-npd7r\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.438578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-config-volume\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.439685 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-config-volume\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.451420 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-secret-volume\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.463149 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npd7r\" (UniqueName: \"kubernetes.io/projected/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-kube-api-access-npd7r\") pod \"collect-profiles-29456970-5ml9t\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.553200 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:00 crc kubenswrapper[4865]: I0103 05:30:00.839729 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t"] Jan 03 05:30:02 crc kubenswrapper[4865]: I0103 05:30:02.049190 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" event={"ID":"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7","Type":"ContainerStarted","Data":"def9b3b4fee66353db3ee170ffe8abb5b48997e2a25476678903b6aad847a82d"} Jan 03 05:30:02 crc kubenswrapper[4865]: I0103 05:30:02.049797 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" event={"ID":"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7","Type":"ContainerStarted","Data":"3f97da7778258a379764bcdef7a4b377b3fcfdf840ddbc38c144e8f8c45684ae"} Jan 03 05:30:02 crc kubenswrapper[4865]: I0103 05:30:02.074090 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" 
podStartSLOduration=2.074067532 podStartE2EDuration="2.074067532s" podCreationTimestamp="2026-01-03 05:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 05:30:02.073851656 +0000 UTC m=+4429.190904851" watchObservedRunningTime="2026-01-03 05:30:02.074067532 +0000 UTC m=+4429.191120717" Jan 03 05:30:03 crc kubenswrapper[4865]: I0103 05:30:03.062882 4865 generic.go:334] "Generic (PLEG): container finished" podID="7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7" containerID="def9b3b4fee66353db3ee170ffe8abb5b48997e2a25476678903b6aad847a82d" exitCode=0 Jan 03 05:30:03 crc kubenswrapper[4865]: I0103 05:30:03.062982 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" event={"ID":"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7","Type":"ContainerDied","Data":"def9b3b4fee66353db3ee170ffe8abb5b48997e2a25476678903b6aad847a82d"} Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.552224 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.622841 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-config-volume\") pod \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.623095 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-secret-volume\") pod \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.623132 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npd7r\" (UniqueName: \"kubernetes.io/projected/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-kube-api-access-npd7r\") pod \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\" (UID: \"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7\") " Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.623322 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7" (UID: "7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.624285 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.629286 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-kube-api-access-npd7r" (OuterVolumeSpecName: "kube-api-access-npd7r") pod "7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7" (UID: "7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7"). InnerVolumeSpecName "kube-api-access-npd7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.630375 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7" (UID: "7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.726788 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 05:30:04 crc kubenswrapper[4865]: I0103 05:30:04.726831 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npd7r\" (UniqueName: \"kubernetes.io/projected/7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7-kube-api-access-npd7r\") on node \"crc\" DevicePath \"\"" Jan 03 05:30:05 crc kubenswrapper[4865]: I0103 05:30:05.102478 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" event={"ID":"7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7","Type":"ContainerDied","Data":"3f97da7778258a379764bcdef7a4b377b3fcfdf840ddbc38c144e8f8c45684ae"} Jan 03 05:30:05 crc kubenswrapper[4865]: I0103 05:30:05.102534 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f97da7778258a379764bcdef7a4b377b3fcfdf840ddbc38c144e8f8c45684ae" Jan 03 05:30:05 crc kubenswrapper[4865]: I0103 05:30:05.102574 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456970-5ml9t" Jan 03 05:30:05 crc kubenswrapper[4865]: I0103 05:30:05.166906 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb"] Jan 03 05:30:05 crc kubenswrapper[4865]: I0103 05:30:05.168082 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456925-phpqb"] Jan 03 05:30:07 crc kubenswrapper[4865]: I0103 05:30:07.171626 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d9956d-43dd-4501-83e0-c576b055e696" path="/var/lib/kubelet/pods/73d9956d-43dd-4501-83e0-c576b055e696/volumes" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.008708 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mn8jg"] Jan 03 05:30:19 crc kubenswrapper[4865]: E0103 05:30:19.013208 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7" containerName="collect-profiles" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.013263 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7" containerName="collect-profiles" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.013657 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2b0a09-150a-42ad-8b9a-bcf9f30a9fb7" containerName="collect-profiles" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.016161 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.026315 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krc2v\" (UniqueName: \"kubernetes.io/projected/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-kube-api-access-krc2v\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.026627 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-utilities\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.027116 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-catalog-content\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.046167 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn8jg"] Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.134454 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-catalog-content\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.134696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-krc2v\" (UniqueName: \"kubernetes.io/projected/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-kube-api-access-krc2v\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.134896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-utilities\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.135158 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-catalog-content\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.135447 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-utilities\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.157332 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krc2v\" (UniqueName: \"kubernetes.io/projected/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-kube-api-access-krc2v\") pod \"redhat-operators-mn8jg\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") " pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.363783 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:19 crc kubenswrapper[4865]: I0103 05:30:19.890919 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn8jg"] Jan 03 05:30:20 crc kubenswrapper[4865]: I0103 05:30:20.245194 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8jg" event={"ID":"ff80e90e-7ce3-49bc-8257-0b6feda46fa5","Type":"ContainerStarted","Data":"3d4b0076286298cd26d96c0e495fe92aa28c8f0f3f1382fe90abd17871567247"} Jan 03 05:30:20 crc kubenswrapper[4865]: I0103 05:30:20.245587 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8jg" event={"ID":"ff80e90e-7ce3-49bc-8257-0b6feda46fa5","Type":"ContainerStarted","Data":"772f70274d782928e51b5e5e19b215bc4a366e949f4c478fe97e17df8d9a7a32"} Jan 03 05:30:21 crc kubenswrapper[4865]: I0103 05:30:21.256419 4865 generic.go:334] "Generic (PLEG): container finished" podID="ff80e90e-7ce3-49bc-8257-0b6feda46fa5" containerID="3d4b0076286298cd26d96c0e495fe92aa28c8f0f3f1382fe90abd17871567247" exitCode=0 Jan 03 05:30:21 crc kubenswrapper[4865]: I0103 05:30:21.256508 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8jg" event={"ID":"ff80e90e-7ce3-49bc-8257-0b6feda46fa5","Type":"ContainerDied","Data":"3d4b0076286298cd26d96c0e495fe92aa28c8f0f3f1382fe90abd17871567247"} Jan 03 05:30:23 crc kubenswrapper[4865]: I0103 05:30:23.280661 4865 generic.go:334] "Generic (PLEG): container finished" podID="ff80e90e-7ce3-49bc-8257-0b6feda46fa5" containerID="b27421d5fff02bc81991da8624035bf82e205b29e8ba3453a3f1dbed862f2d05" exitCode=0 Jan 03 05:30:23 crc kubenswrapper[4865]: I0103 05:30:23.280788 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8jg" 
event={"ID":"ff80e90e-7ce3-49bc-8257-0b6feda46fa5","Type":"ContainerDied","Data":"b27421d5fff02bc81991da8624035bf82e205b29e8ba3453a3f1dbed862f2d05"} Jan 03 05:30:24 crc kubenswrapper[4865]: I0103 05:30:24.292419 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8jg" event={"ID":"ff80e90e-7ce3-49bc-8257-0b6feda46fa5","Type":"ContainerStarted","Data":"56c52d93347cf6f060171c46b6d4774277d039ba798baa0359986501c543ace0"} Jan 03 05:30:24 crc kubenswrapper[4865]: I0103 05:30:24.311129 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mn8jg" podStartSLOduration=3.8596618190000003 podStartE2EDuration="6.311093956s" podCreationTimestamp="2026-01-03 05:30:18 +0000 UTC" firstStartedPulling="2026-01-03 05:30:21.258568581 +0000 UTC m=+4448.375621766" lastFinishedPulling="2026-01-03 05:30:23.710000678 +0000 UTC m=+4450.827053903" observedRunningTime="2026-01-03 05:30:24.310639834 +0000 UTC m=+4451.427693029" watchObservedRunningTime="2026-01-03 05:30:24.311093956 +0000 UTC m=+4451.428147141" Jan 03 05:30:29 crc kubenswrapper[4865]: I0103 05:30:29.365285 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:29 crc kubenswrapper[4865]: I0103 05:30:29.366006 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mn8jg" Jan 03 05:30:30 crc kubenswrapper[4865]: I0103 05:30:30.533282 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mn8jg" podUID="ff80e90e-7ce3-49bc-8257-0b6feda46fa5" containerName="registry-server" probeResult="failure" output=< Jan 03 05:30:30 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 03 05:30:30 crc kubenswrapper[4865]: > Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.187137 4865 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-t2xjb"] Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.193997 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.214934 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2xjb"] Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.340623 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-catalog-content\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.340826 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmf6\" (UniqueName: \"kubernetes.io/projected/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-kube-api-access-rfmf6\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.340887 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-utilities\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.443761 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmf6\" (UniqueName: \"kubernetes.io/projected/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-kube-api-access-rfmf6\") pod \"certified-operators-t2xjb\" (UID: 
\"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.444187 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-utilities\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.444351 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-catalog-content\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.444755 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-utilities\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.445096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-catalog-content\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " pod="openshift-marketplace/certified-operators-t2xjb" Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.470650 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmf6\" (UniqueName: \"kubernetes.io/projected/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-kube-api-access-rfmf6\") pod \"certified-operators-t2xjb\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") " 
pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:37 crc kubenswrapper[4865]: I0103 05:30:37.530813 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:38 crc kubenswrapper[4865]: I0103 05:30:38.128057 4865 scope.go:117] "RemoveContainer" containerID="ef18b966f38813877d73838343a6adb12b82604d15024def44342ed538e88caa"
Jan 03 05:30:38 crc kubenswrapper[4865]: I0103 05:30:38.130651 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2xjb"]
Jan 03 05:30:38 crc kubenswrapper[4865]: I0103 05:30:38.239176 4865 scope.go:117] "RemoveContainer" containerID="fa747b1972c46801b1666cdba81273ef33068b96bda07bd744f30165ce7eb9c5"
Jan 03 05:30:38 crc kubenswrapper[4865]: I0103 05:30:38.301809 4865 scope.go:117] "RemoveContainer" containerID="b3f2be84c9963c8d4242c66cd8edf8d25342c85169956b6db900d955b63d3b26"
Jan 03 05:30:38 crc kubenswrapper[4865]: I0103 05:30:38.335584 4865 scope.go:117] "RemoveContainer" containerID="33392842e427b1e6e95057e3226dac74eab5e10ff93abe42c57075eb2e3efdfb"
Jan 03 05:30:38 crc kubenswrapper[4865]: I0103 05:30:38.421117 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2xjb" event={"ID":"8f6e125a-2ba6-4c0b-a181-db3075b7bc13","Type":"ContainerStarted","Data":"05a246cdb4d49247c92fc95cf33b75343f6356d7ef6ce83a6100ac4731e07156"}
Jan 03 05:30:39 crc kubenswrapper[4865]: I0103 05:30:39.429544 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mn8jg"
Jan 03 05:30:39 crc kubenswrapper[4865]: I0103 05:30:39.441031 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f6e125a-2ba6-4c0b-a181-db3075b7bc13" containerID="9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230" exitCode=0
Jan 03 05:30:39 crc kubenswrapper[4865]: I0103 05:30:39.441087 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2xjb" event={"ID":"8f6e125a-2ba6-4c0b-a181-db3075b7bc13","Type":"ContainerDied","Data":"9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230"}
Jan 03 05:30:39 crc kubenswrapper[4865]: I0103 05:30:39.445893 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 03 05:30:39 crc kubenswrapper[4865]: I0103 05:30:39.511165 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mn8jg"
Jan 03 05:30:41 crc kubenswrapper[4865]: I0103 05:30:41.472637 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f6e125a-2ba6-4c0b-a181-db3075b7bc13" containerID="06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7" exitCode=0
Jan 03 05:30:41 crc kubenswrapper[4865]: I0103 05:30:41.473299 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2xjb" event={"ID":"8f6e125a-2ba6-4c0b-a181-db3075b7bc13","Type":"ContainerDied","Data":"06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7"}
Jan 03 05:30:41 crc kubenswrapper[4865]: I0103 05:30:41.759669 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn8jg"]
Jan 03 05:30:41 crc kubenswrapper[4865]: I0103 05:30:41.759961 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mn8jg" podUID="ff80e90e-7ce3-49bc-8257-0b6feda46fa5" containerName="registry-server" containerID="cri-o://56c52d93347cf6f060171c46b6d4774277d039ba798baa0359986501c543ace0" gracePeriod=2
Jan 03 05:30:42 crc kubenswrapper[4865]: I0103 05:30:42.483538 4865 generic.go:334] "Generic (PLEG): container finished" podID="ff80e90e-7ce3-49bc-8257-0b6feda46fa5" containerID="56c52d93347cf6f060171c46b6d4774277d039ba798baa0359986501c543ace0" exitCode=0
Jan 03 05:30:42 crc kubenswrapper[4865]: I0103 05:30:42.483639 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8jg" event={"ID":"ff80e90e-7ce3-49bc-8257-0b6feda46fa5","Type":"ContainerDied","Data":"56c52d93347cf6f060171c46b6d4774277d039ba798baa0359986501c543ace0"}
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.060551 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8jg"
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.091764 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krc2v\" (UniqueName: \"kubernetes.io/projected/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-kube-api-access-krc2v\") pod \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") "
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.091918 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-catalog-content\") pod \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") "
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.091999 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-utilities\") pod \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\" (UID: \"ff80e90e-7ce3-49bc-8257-0b6feda46fa5\") "
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.094663 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-utilities" (OuterVolumeSpecName: "utilities") pod "ff80e90e-7ce3-49bc-8257-0b6feda46fa5" (UID: "ff80e90e-7ce3-49bc-8257-0b6feda46fa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.098679 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-kube-api-access-krc2v" (OuterVolumeSpecName: "kube-api-access-krc2v") pod "ff80e90e-7ce3-49bc-8257-0b6feda46fa5" (UID: "ff80e90e-7ce3-49bc-8257-0b6feda46fa5"). InnerVolumeSpecName "kube-api-access-krc2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.196057 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krc2v\" (UniqueName: \"kubernetes.io/projected/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-kube-api-access-krc2v\") on node \"crc\" DevicePath \"\""
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.196101 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-utilities\") on node \"crc\" DevicePath \"\""
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.221174 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff80e90e-7ce3-49bc-8257-0b6feda46fa5" (UID: "ff80e90e-7ce3-49bc-8257-0b6feda46fa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.298374 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff80e90e-7ce3-49bc-8257-0b6feda46fa5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.505151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8jg" event={"ID":"ff80e90e-7ce3-49bc-8257-0b6feda46fa5","Type":"ContainerDied","Data":"772f70274d782928e51b5e5e19b215bc4a366e949f4c478fe97e17df8d9a7a32"}
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.505225 4865 scope.go:117] "RemoveContainer" containerID="56c52d93347cf6f060171c46b6d4774277d039ba798baa0359986501c543ace0"
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.505413 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8jg"
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.532453 4865 scope.go:117] "RemoveContainer" containerID="b27421d5fff02bc81991da8624035bf82e205b29e8ba3453a3f1dbed862f2d05"
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.558055 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn8jg"]
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.568807 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mn8jg"]
Jan 03 05:30:43 crc kubenswrapper[4865]: I0103 05:30:43.569080 4865 scope.go:117] "RemoveContainer" containerID="3d4b0076286298cd26d96c0e495fe92aa28c8f0f3f1382fe90abd17871567247"
Jan 03 05:30:44 crc kubenswrapper[4865]: I0103 05:30:44.515242 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2xjb" event={"ID":"8f6e125a-2ba6-4c0b-a181-db3075b7bc13","Type":"ContainerStarted","Data":"75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794"}
Jan 03 05:30:44 crc kubenswrapper[4865]: I0103 05:30:44.544253 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2xjb" podStartSLOduration=3.455477277 podStartE2EDuration="7.54423307s" podCreationTimestamp="2026-01-03 05:30:37 +0000 UTC" firstStartedPulling="2026-01-03 05:30:39.445307734 +0000 UTC m=+4466.562360959" lastFinishedPulling="2026-01-03 05:30:43.534063527 +0000 UTC m=+4470.651116752" observedRunningTime="2026-01-03 05:30:44.542248196 +0000 UTC m=+4471.659301391" watchObservedRunningTime="2026-01-03 05:30:44.54423307 +0000 UTC m=+4471.661286255"
Jan 03 05:30:45 crc kubenswrapper[4865]: I0103 05:30:45.169263 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff80e90e-7ce3-49bc-8257-0b6feda46fa5" path="/var/lib/kubelet/pods/ff80e90e-7ce3-49bc-8257-0b6feda46fa5/volumes"
Jan 03 05:30:47 crc kubenswrapper[4865]: I0103 05:30:47.531801 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:47 crc kubenswrapper[4865]: I0103 05:30:47.532513 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:47 crc kubenswrapper[4865]: I0103 05:30:47.618864 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:57 crc kubenswrapper[4865]: I0103 05:30:57.630058 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:57 crc kubenswrapper[4865]: I0103 05:30:57.716506 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2xjb"]
Jan 03 05:30:57 crc kubenswrapper[4865]: I0103 05:30:57.716712 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t2xjb" podUID="8f6e125a-2ba6-4c0b-a181-db3075b7bc13" containerName="registry-server" containerID="cri-o://75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794" gracePeriod=2
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.196799 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.315375 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfmf6\" (UniqueName: \"kubernetes.io/projected/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-kube-api-access-rfmf6\") pod \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") "
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.315502 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-utilities\") pod \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") "
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.315685 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-catalog-content\") pod \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\" (UID: \"8f6e125a-2ba6-4c0b-a181-db3075b7bc13\") "
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.317546 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-utilities" (OuterVolumeSpecName: "utilities") pod "8f6e125a-2ba6-4c0b-a181-db3075b7bc13" (UID: "8f6e125a-2ba6-4c0b-a181-db3075b7bc13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.339635 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-kube-api-access-rfmf6" (OuterVolumeSpecName: "kube-api-access-rfmf6") pod "8f6e125a-2ba6-4c0b-a181-db3075b7bc13" (UID: "8f6e125a-2ba6-4c0b-a181-db3075b7bc13"). InnerVolumeSpecName "kube-api-access-rfmf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.386466 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f6e125a-2ba6-4c0b-a181-db3075b7bc13" (UID: "8f6e125a-2ba6-4c0b-a181-db3075b7bc13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.419159 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.419566 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfmf6\" (UniqueName: \"kubernetes.io/projected/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-kube-api-access-rfmf6\") on node \"crc\" DevicePath \"\""
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.419587 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f6e125a-2ba6-4c0b-a181-db3075b7bc13-utilities\") on node \"crc\" DevicePath \"\""
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.687406 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f6e125a-2ba6-4c0b-a181-db3075b7bc13" containerID="75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794" exitCode=0
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.687468 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2xjb" event={"ID":"8f6e125a-2ba6-4c0b-a181-db3075b7bc13","Type":"ContainerDied","Data":"75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794"}
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.687518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2xjb" event={"ID":"8f6e125a-2ba6-4c0b-a181-db3075b7bc13","Type":"ContainerDied","Data":"05a246cdb4d49247c92fc95cf33b75343f6356d7ef6ce83a6100ac4731e07156"}
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.687527 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2xjb"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.687550 4865 scope.go:117] "RemoveContainer" containerID="75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.717535 4865 scope.go:117] "RemoveContainer" containerID="06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.744193 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2xjb"]
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.761246 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t2xjb"]
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.771167 4865 scope.go:117] "RemoveContainer" containerID="9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.822056 4865 scope.go:117] "RemoveContainer" containerID="75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794"
Jan 03 05:30:58 crc kubenswrapper[4865]: E0103 05:30:58.823436 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794\": container with ID starting with 75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794 not found: ID does not exist" containerID="75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.823622 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794"} err="failed to get container status \"75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794\": rpc error: code = NotFound desc = could not find container \"75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794\": container with ID starting with 75265e64dcda24a36b0803a19afe786ef0b60505b2ebd50a6f0364eec6066794 not found: ID does not exist"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.823814 4865 scope.go:117] "RemoveContainer" containerID="06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7"
Jan 03 05:30:58 crc kubenswrapper[4865]: E0103 05:30:58.825062 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7\": container with ID starting with 06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7 not found: ID does not exist" containerID="06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.825129 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7"} err="failed to get container status \"06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7\": rpc error: code = NotFound desc = could not find container \"06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7\": container with ID starting with 06c31557311fc75097662137ae315002a5e622483c8b043d8610d3c09ccc62c7 not found: ID does not exist"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.825172 4865 scope.go:117] "RemoveContainer" containerID="9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230"
Jan 03 05:30:58 crc kubenswrapper[4865]: E0103 05:30:58.825963 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230\": container with ID starting with 9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230 not found: ID does not exist" containerID="9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230"
Jan 03 05:30:58 crc kubenswrapper[4865]: I0103 05:30:58.826028 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230"} err="failed to get container status \"9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230\": rpc error: code = NotFound desc = could not find container \"9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230\": container with ID starting with 9d3edd606cd22716e2a1b886760f96edf775bad95ec7396e7e13bbe6c721f230 not found: ID does not exist"
Jan 03 05:30:59 crc kubenswrapper[4865]: I0103 05:30:59.176000 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f6e125a-2ba6-4c0b-a181-db3075b7bc13" path="/var/lib/kubelet/pods/8f6e125a-2ba6-4c0b-a181-db3075b7bc13/volumes"
Jan 03 05:31:10 crc kubenswrapper[4865]: I0103 05:31:10.739333 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 05:31:10 crc kubenswrapper[4865]: I0103 05:31:10.740025 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 05:31:38 crc kubenswrapper[4865]: I0103 05:31:38.453979 4865 scope.go:117] "RemoveContainer" containerID="8fddbede74aa29f7885c3e264650c3f2ceb4195755013d67456f0d660e30cc70"
Jan 03 05:31:40 crc kubenswrapper[4865]: I0103 05:31:40.739654 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 05:31:40 crc kubenswrapper[4865]: I0103 05:31:40.741802 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 05:32:10 crc kubenswrapper[4865]: I0103 05:32:10.740044 4865 patch_prober.go:28] interesting pod/machine-config-daemon-mh2rc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 03 05:32:10 crc kubenswrapper[4865]: I0103 05:32:10.740827 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 03 05:32:10 crc kubenswrapper[4865]: I0103 05:32:10.740896 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc"
Jan 03 05:32:10 crc kubenswrapper[4865]: I0103 05:32:10.741937 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa"} pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 03 05:32:10 crc kubenswrapper[4865]: I0103 05:32:10.742032 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42" containerName="machine-config-daemon" containerID="cri-o://2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa" gracePeriod=600
Jan 03 05:32:11 crc kubenswrapper[4865]: E0103 05:32:11.031269 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42"
Jan 03 05:32:11 crc kubenswrapper[4865]: I0103 05:32:11.518200 4865 generic.go:334] "Generic (PLEG): container finished" podID="122690aa-cb57-4839-8349-30c5221c5b42" containerID="2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa" exitCode=0
Jan 03 05:32:11 crc kubenswrapper[4865]: I0103 05:32:11.518282 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" event={"ID":"122690aa-cb57-4839-8349-30c5221c5b42","Type":"ContainerDied","Data":"2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa"}
Jan 03 05:32:11 crc kubenswrapper[4865]: I0103 05:32:11.518552 4865 scope.go:117] "RemoveContainer" containerID="a6a22db5e4b97e93086f27b4ecd545339e0be3ad2ab0bae428941f6a6737412c"
Jan 03 05:32:11 crc kubenswrapper[4865]: I0103 05:32:11.519125 4865 scope.go:117] "RemoveContainer" containerID="2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa"
Jan 03 05:32:11 crc kubenswrapper[4865]: E0103 05:32:11.519460 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42"
Jan 03 05:32:25 crc kubenswrapper[4865]: I0103 05:32:25.156428 4865 scope.go:117] "RemoveContainer" containerID="2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa"
Jan 03 05:32:25 crc kubenswrapper[4865]: E0103 05:32:25.157522 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42"
Jan 03 05:32:37 crc kubenswrapper[4865]: I0103 05:32:37.156515 4865 scope.go:117] "RemoveContainer" containerID="2659b526deeb20d88dc6016ab7f0fe8d915c088fe78db1b6134e8b10803fe3fa"
Jan 03 05:32:37 crc kubenswrapper[4865]: E0103 05:32:37.157323 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mh2rc_openshift-machine-config-operator(122690aa-cb57-4839-8349-30c5221c5b42)\"" pod="openshift-machine-config-operator/machine-config-daemon-mh2rc" podUID="122690aa-cb57-4839-8349-30c5221c5b42"
Jan 03 05:32:38 crc kubenswrapper[4865]: I0103 05:32:38.562369 4865 scope.go:117] "RemoveContainer" containerID="3af77f79408e13756d8cf5289baad54e90c8698d71d92e59cc60c4148d4994eb"
Jan 03 05:32:38 crc kubenswrapper[4865]: I0103 05:32:38.587025 4865 scope.go:117] "RemoveContainer" containerID="ea046c739351e51ae1d6fd04167fe4fa2221732ffebc733420e1872395bba350"
Jan 03 05:32:38 crc kubenswrapper[4865]: I0103 05:32:38.667737 4865 scope.go:117] "RemoveContainer" containerID="63c5c058efefd9536e9291ed4bd82dca3c08df0cacf092653ceb866d71bcef29"